Funding: Supported by the National Natural Science Foundation of China (Grant Nos. U23A20530, 82273858, and 82173746), the National Key Research and Development Program of China (Grant No. 2023YFF1204904), and the Shanghai Frontiers Science Center of Optogenetic Techniques for Cell Metabolism (Shanghai Municipal Education Commission, China).
Abstract: Activity cliffs (ACs) are generally defined as pairs of similar compounds that differ only by a minor structural modification yet exhibit a large difference in binding affinity for a given target. ACs offer crucial insights that help medicinal chemists optimize molecular structures; nonetheless, they are also a major source of prediction error in structure-activity relationship (SAR) models. To date, several studies have shown that deep neural networks based on molecular images or graphs still need further improvement in predicting the potency of ACs. In this paper, we integrated the triplet loss used in face recognition with a pre-training strategy to develop ACtriplet, a prediction model tailored for ACs. Extensive comparisons against multiple baseline models on 30 benchmark datasets showed that ACtriplet significantly outperformed deep learning (DL) models without pre-training. In addition, we explored the effect of pre-training on data representation. Finally, a case study demonstrated that the model's interpretability module can reasonably explain its predictions. Given that the amount of available data cannot be increased rapidly, this framework makes better use of existing data, advancing the potential of DL in early-stage drug discovery and optimization.
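The triplet loss borrowed from face recognition can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the function names, the choice of Euclidean distance, and the default margin are assumptions; in ACtriplet the anchor/positive/negative would be learned molecular embeddings, with an activity-cliff partner serving as the negative.

```python
def euclidean(a, b):
    """Euclidean distance between two embedding vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Hinge-style triplet loss: pull the positive (similar activity) toward
    the anchor and push the negative (e.g. an activity-cliff partner) away
    until it is at least `margin` farther than the positive."""
    return max(0.0, euclidean(anchor, positive) - euclidean(anchor, negative) + margin)
```

For example, when the negative already sits well beyond the positive plus the margin, the loss is zero and the triplet contributes no gradient; otherwise the loss grows linearly with the violation.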
Abstract: To address the limited load-feature capture and insufficient decomposition accuracy of existing non-intrusive load monitoring (NILM) methods, this paper proposes a multi-head self-attention NILM method based on an improved BERT (bidirectional encoder representations from transformers) model, named FAT-BERT (frequency and temporal attention-BERT). First, a Fourier transform converts time-domain data into the frequency domain, and multi-scale convolutions capture both time- and frequency-domain features of the load signal, strengthening the model's representation of diverse load signals. Second, a frequency attention mechanism is introduced into the multi-head self-attention, improving the model's perception of frequency components in time-series data and further refining the representation of complex load patterns; local self-attention is added to the improved BERT model to cut unnecessary global computation and speed up the model. Residual connections combined with regularization then stabilize training and better prevent overfitting. Finally, experiments on the REDD and UK-DALE datasets validate the effectiveness of the proposed method.
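The first step of the pipeline, converting a time-domain load signal into frequency-domain features via a Fourier transform, can be sketched as follows. This is a naive O(n²) DFT for illustration only; the paper's actual transform, windowing, and feature dimensions are not specified here, and a real pipeline would use an FFT.

```python
import cmath
import math

def dft_magnitudes(signal):
    """Naive discrete Fourier transform: magnitude spectrum of a load
    signal, mapping time-domain samples to frequency components."""
    n = len(signal)
    return [abs(sum(x * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t, x in enumerate(signal)))
            for k in range(n)]

# A pure 1-cycle cosine over 8 samples concentrates its energy in bin 1
# (and its mirror bin 7), which is the kind of frequency signature an
# appliance's load pattern can expose to a frequency attention mechanism.
spectrum = dft_magnitudes([math.cos(2 * math.pi * t / 8) for t in range(8)])
```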
Abstract: To address the lack of a substitute-technology identification mechanism and the limited precision of technology-element analysis in research on "chokepoint" technologies, this paper proposes a method for identifying substitutes for chokepoint technologies that combines prompt engineering with a BERT-LSTM model. First, ECCN items are parsed against the Commercial Control List (CCL), a patent search is conducted, and the SPC algorithm extracts the key core patents on the main technological path. Second, large-language-model prompt engineering extracts "problem-solution pairs" to analyze technical effects, and Function-Oriented Search (FOS) is combined with these to make an initial search for patents that may offer substitute technical effects. Third, a BERT-LSTM model performs binary classification on the patent texts to precisely identify patent samples with substitute technical effects; prompt engineering then extracts "solution-category pairs" to systematically identify substitute technical solutions. Finally, a two-dimensional science-industry evaluation system grades the substitution potential of the candidates. Taking lithography as an example, the paper illustrates the workflow of the method and systematically identifies five substitute technologies for extreme ultraviolet (EUV) lithography along with their substitution potential.
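The binary-classification stage over patent texts reduces, at its head, to a sigmoid score over a pooled text representation. The sketch below shows only that final scoring step; the feature vector, weights, and function names are illustrative assumptions standing in for the BERT-LSTM encoder described in the abstract.

```python
import math

def sigmoid(z):
    """Logistic function mapping a raw score to a probability in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

def substitute_score(features, weights, bias=0.0):
    """Binary classification head: probability that a patent offers a
    substitute technical effect, given a pooled text representation.
    `features` stands in for the BERT-LSTM output; weights are learned."""
    return sigmoid(sum(w * f for w, f in zip(weights, features)) + bias)
```

With zero weights the head is maximally uncertain (0.5); training moves the weights so that patents with substitute effects score near 1.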
Abstract: Aspect-based sentiment analysis aims to identify the sentiment expressed toward specific aspects in a text, but existing work still faces several challenges: BERT-based aspect-level sentiment analysis suffers from semantic overfitting and underuse of low-level semantics; self-attention loses local information; and architectures with multiple encoder layers and multi-granularity semantics introduce information redundancy. To address these issues, this paper proposes MSBEL, a multi-granular semantic aspect-based sentiment analysis model with fusion of BERT encoding layers. Specifically, a pyramid attention mechanism exploits the semantic features of every encoder layer and incorporates the low-level encoders to reduce overfitting; multi-scale gated convolutions strengthen the model's handling of lost local information; and cosine attention highlights sentiment features related to the aspect word, reducing redundancy. t-SNE visualization shows that MSBEL clusters sentiment representations better than BERT. In addition, the model was compared with mainstream models on multiple benchmark datasets: relative to LCF-BERT, it improves F1 by 1.53%, 3.94%, 1.39%, 6.68%, and 5.97% on five datasets; relative to SenticGCN, it improves F1 by 0.94% on average and up to 2.12%; and relative to ABSA-DeBERTa, it improves F1 by 1.16% on average and up to 4.20%, validating the model's effectiveness and superiority on aspect-based sentiment analysis.
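The cosine attention used to highlight aspect-related sentiment features can be sketched as follows. This is a minimal sketch under stated assumptions: the real model operates on learned BERT-layer embeddings, and the function names and softmax normalization here are illustrative choices, not details from the paper.

```python
import math

def cosine_attention(aspect_vec, token_vecs):
    """Score each token embedding by cosine similarity to the aspect
    embedding, then softmax-normalize the scores into attention weights,
    so tokens semantically close to the aspect word dominate."""
    def cos(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(x * x for x in b))
        return dot / (na * nb) if na and nb else 0.0
    scores = [cos(aspect_vec, t) for t in token_vecs]
    exps = [math.exp(s) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

# A token aligned with the aspect direction receives the larger weight.
weights = cosine_attention([1.0, 0.0], [[1.0, 0.0], [0.0, 1.0]])
```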
Funding: Funded by the Deanship of Scientific Research at Imam Mohammad Ibn Saud Islamic University (IMSIU) (Grant No. IMSIU-DDRSP2504).
Abstract: Dialectal Arabic text classification (DA-TC) provides a mechanism for performing sentiment analysis on recent Arabic social media, which raises many challenges owing to the rich morphology of the Arabic language and its wide range of dialect variations. Annotated datasets are scarce, and preprocessing the noisy content is even more challenging, sometimes removing important sentiment cues from the input. To overcome these problems, this study investigates transfer learning with pre-trained transformer models to classify sentiment in Arabic texts with high accuracy. Specifically, it fine-tunes the CAMeLBERT model on the Multi-Domain Arabic Resources for Sentiment Analysis (MARSA) dataset, which contains more than 56,000 manually annotated tweets across the political, social, sports, and technology domains. The proposed method avoids extensive preprocessing and shows that raw data yields better results because it retains more linguistic features. The fine-tuned CAMeLBERT model achieves state-of-the-art accuracy of 92%, precision of 91.7%, recall of 92.3%, and an F1-score of 91.5%, outperforming standard machine learning models and ensemble-based/deep learning techniques. Performance comparisons against other pre-trained models, namely AraBERTv02-twitter and MARBERT, show that transformer-based architectures are consistently the best suited for noisy Arabic texts. This work provides a strong remedy for the problems in Arabic sentiment analysis and offers recommendations for easily tuning pre-trained models to handle challenging linguistic features and domain-specific tasks.
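The precision, recall, and F1 figures reported above are related by the standard definitions below; the sketch computes them from raw confusion counts. The example counts are hypothetical, and the paper's figures are presumably class-averaged, so they need not satisfy these single-class formulas exactly.

```python
def prf1(tp, fp, fn):
    """Precision, recall, and F1 from true-positive, false-positive,
    and false-negative counts. F1 is the harmonic mean of P and R."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Hypothetical counts: 90 correct positives, 10 false alarms, 10 misses.
p, r, f1 = prf1(90, 10, 10)
```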
Abstract: With the rapid development of natural language processing, the BERT (Bidirectional Encoder Representations from Transformers) algorithm has stood out, offering new ideas and methods for optimizing the information-sharing mechanism for supervision and inspection in the tobacco industry. Applying the algorithm promises to remove barriers to information flow in tobacco-industry supervision, streamline information processing, and improve the efficiency and quality of sharing. Focusing on the current state of, and problems in, tobacco-industry supervision, this study proposes a BERT-based optimization scheme for the supervision information-sharing mechanism, aiming to provide theoretical support and practical reference for efficient supervision work in the tobacco industry.