Journal Articles
326 articles found
1. A Knowledge-Enhanced Disease Diagnosis Method Based on Prompt Learning and BERT Integration
Authors: Zheng Zhang, Hengyang Wu, Na Wang. Journal on Artificial Intelligence, 2025, Issue 1, pp. 17-37 (21 pages)
Abstract: This paper proposes a knowledge-enhanced disease diagnosis method based on a prompt learning framework. Addressing challenges such as the complexity of medical terminology, the difficulty of constructing medical knowledge graphs, and the scarcity of medical data, the method retrieves structured knowledge related to clinical cases from external knowledge graphs, encodes it, and injects it into prompt templates to enhance the language model's understanding and reasoning capabilities for the task. We conducted experiments on three public datasets: CHIP-CTC, IMCS-V2-NER, and KUAKE-QTR. The results indicate that the proposed method significantly outperforms existing models across multiple evaluation metrics. Additionally, ablation studies confirmed the critical role of the knowledge injection module: removing it resulted in a significant drop in F1 score. The experimental results demonstrate that the proposed method not only effectively improves the accuracy of disease diagnosis but also enhances the interpretability of the predictions, providing more reliable support and evidence for clinical diagnosis.
Keywords: knowledge enhancement; disease diagnosis; prompt learning; BERT; knowledge graph
2. PromptFusion: Harmonized Semantic Prompt Learning for Infrared and Visible Image Fusion
Authors: Jinyuan Liu, Xingyuan Li, Zirui Wang, Zhiying Jiang, Wei Zhong, Wei Fan, Bin Xu. IEEE/CAA Journal of Automatica Sinica, 2025, Issue 3, pp. 502-515 (14 pages)
Abstract: The goal of infrared and visible image fusion (IVIF) is to integrate the unique advantages of both modalities to achieve a more comprehensive understanding of a scene. However, existing methods struggle to effectively handle modal disparities, resulting in visual degradation of the details and prominent targets of the fused images. To address these challenges, we introduce PromptFusion, a prompt-based approach that harmoniously combines multi-modality images under the guidance of semantic prompts. First, to better characterize the features of different modalities, a contourlet autoencoder is designed to separate and extract the high-/low-frequency components of different modalities, thereby improving the extraction of fine details and textures. We also introduce a prompt learning mechanism using positive and negative prompts, leveraging vision-language models to improve the fusion model's understanding and identification of targets in multi-modality images, leading to improved performance in downstream tasks. Furthermore, we employ bi-level asymptotic convergence optimization, which simplifies the intricate non-singleton non-convex bi-level problem into a series of convergent and differentiable single optimization problems that can be effectively resolved through gradient descent. Our approach advances the state of the art, delivering superior fusion quality and boosting the performance of related downstream tasks. Project page: https://github.com/hey-it-s-me/PromptFusion.
Keywords: bi-level optimization; image fusion; infrared and visible image; prompt learning
3. Low Resource Chinese Geological Text Named Entity Recognition Based on Prompt Learning (Cited: 2)
Authors: Hang He, Chao Ma, Shan Ye, Wenqiang Tang, Yuxuan Zhou, Zhen Yu, Jiaxin Yi, Li Hou, Mingcai Hou. Journal of Earth Science (SCIE, CAS, CSCD), 2024, Issue 3, pp. 1035-1043 (9 pages)
Abstract: Geological reports are a significant accomplishment for geologists involved in geological investigations and scientific research, as they contain rich data and textual information. With the rapid development of science and technology, a large number of textual reports have accumulated in the field of geology. However, many non-hot topics and non-English-speaking regions are neglected in mainstream geoscience databases for geological information mining, making it more challenging for some researchers to extract necessary information from these texts. Natural language processing (NLP) has obvious advantages in processing large amounts of textual data. The objective of this paper is to identify geological named entities from Chinese geological texts using NLP techniques. We propose the RoBERTa-Prompt-Tuning-NER method, which leverages the concept of prompt learning and requires only a small amount of annotated data to train superior models for recognizing geological named entities in low-resource dataset configurations. The RoBERTa layer captures context-based information and longer-distance dependencies through dynamic word vectors. Finally, we conducted experiments on the constructed Geological Named Entity Recognition (GNER) dataset. Our experimental results show that the proposed model achieves the highest F1 score, 80.64%, among the four baseline algorithms, demonstrating the reliability and robustness of using the model for named entity recognition of geological texts.
Keywords: prompt learning; named entity recognition (NER); low resource; geological text; text information mining; big data geology
4. A Survey of Prompt Learning Methods for Text Classification (Cited: 5)
Authors: 顾勋勋, 刘建平, 邢嘉璐, 任海玉. 计算机工程与应用 (Computer Engineering and Applications; CSCD, Peking University Core), 2024, Issue 11, pp. 50-61 (12 pages)
Abstract: Text classification is a fundamental task in natural language processing, with important applications in sentiment analysis, news classification, and other areas. Compared with traditional machine learning and deep learning models, prompt learning can perform text classification by constructing prompts even when data are insufficient. In recent years, the emergence of GPT-3 has driven the development of prompt learning methods, which have achieved notable progress in the field of text classification. This survey briefly reviews previous text classification methods and analyzes their problems and shortcomings; describes the development of prompt learning and the methods for constructing prompt templates; and introduces and summarizes research on prompt learning methods for text classification and their results. Finally, it summarizes development trends and open research challenges of prompt learning in text classification and offers an outlook on future work.
Keywords: prompt learning; text classification; sentiment analysis; news classification
5. Dual Modality Prompt Learning for Visual Question-Grounded Answering in Robotic Surgery (Cited: 1)
Authors: Yue Zhang, Wanshu Fan, Peixi Peng, Xin Yang, Dongsheng Zhou, Xiaopeng Wei. Visual Computing for Industry, Biomedicine, and Art, 2024, Issue 1, pp. 316-328 (13 pages)
Abstract: With recent advancements in robotic surgery, notable strides have been made in visual question answering (VQA). Existing VQA systems typically generate textual answers to questions but fail to indicate the location of the relevant content within the image. This limitation restricts the interpretative capacity of VQA models and their ability to explore specific image regions. To address this issue, this study proposes a grounded VQA model for robotic surgery, capable of localizing a specific region during answer prediction. Drawing inspiration from prompt learning in language models, a dual-modality prompt model was developed to enhance precise multimodal information interactions. Specifically, two complementary prompters were introduced to effectively integrate visual and textual prompts into the encoding process of the model. A visual complementary prompter merges visual prompt knowledge with visual information features to guide accurate localization. The textual complementary prompter aligns visual information with textual prompt knowledge and textual information, guiding textual information towards a more accurate inference of the answer. Additionally, a multiple iterative fusion strategy was adopted for comprehensive answer reasoning, to ensure high-quality generation of textual and grounded answers. The experimental results validate the effectiveness of the model, demonstrating its superiority over existing methods on the EndoVis-18 and EndoVis-17 datasets.
Keywords: prompt learning; visual prompt; textual prompt; grounding-answering; visual question answering
6. Pedagogical Alignment of Large Language Models (LLM) for Personalized Learning: A Survey, Trends and Challenges (Cited: 1)
Authors: Mahefa Abel Razafinirina, William Germain Dimbisoa, Thomas Mahatody. Journal of Intelligent Learning Systems and Applications, 2024, Issue 4, pp. 448-480 (33 pages)
Abstract: This survey paper investigates how personalized learning offered by Large Language Models (LLMs) could transform educational experiences. We explore Knowledge Editing Techniques (KME), which guarantee that LLMs maintain current knowledge and are essential for providing accurate and up-to-date information. The datasets analyzed in this article are intended to evaluate LLM performance on educational tasks, such as error correction and question answering. We acknowledge the limitations of LLMs while highlighting their fundamental educational capabilities in writing, math, programming, and reasoning. We also explore two promising system architectures for LLM-based education: a Mixture-of-Experts (MoE) framework and a unified LLM approach. The MoE approach makes use of specialized LLMs for various subjects under the direction of a central controller. We also discuss the use of LLMs for individualized feedback and their potential in content creation, including the creation of videos, quizzes, and plans. In our final section, we discuss the difficulties and potential solutions for incorporating LLMs into educational systems, highlighting the importance of factual accuracy, reducing bias, and fostering critical thinking abilities. The purpose of this survey is to show the promise of LLMs as well as the issues that still need to be resolved in order to facilitate their responsible and successful integration into the educational ecosystem.
Keywords: chain of thought; education; AI; LLM; machine learning; NLP; personalized learning; prompt optimization; video generation
7. Short-Text Classification in Medical Q&A Communities Based on Keyword Expansion and a Prompt-BERT-RCNN Model (Cited: 1)
Authors: 臧志栋, 汤祖懿, 秦振凯, 程结晶. 情报科学 (Information Science; Peking University Core), 2025, Issue 6, pp. 148-155, 163 (9 pages)
Abstract: [Purpose/Significance] Automatic classification of short texts in medical question-answering communities is essential for improving their service efficiency and user experience. This study builds a short-text classification method that combines keyword expansion with a deep learning model to address the sparse features and unclear semantics of short texts. [Method/Process] First, a web crawler collected short user-question texts from the medical Q&A community 寻医问药网. Then, TF-IWF weighting was used to measure keyword importance, and FastText keyword similarity was computed to expand the short-text features. Finally, prompt learning was fused with a deep learning model to build a Prompt-BERT-RCNN model for effective classification of medical short texts. [Results/Conclusions] Empirical results show that classification after keyword expansion is significantly better than before expansion, and the Prompt-BERT-RCNN model achieves 97.92% accuracy on the expanded medical short texts, performing well across all nine medical categories. [Innovation/Limitations] The TF-IWF and FastText expansion method remedies Word2vec's failure to consider keyword rarity and subword context information, and the Prompt-BERT-RCNN model further improves classification accuracy by combining prompt guidance, BERT's deep semantic understanding, and RCNN's region awareness and feature extraction; however, the model's accuracy on some individual topics still needs improvement.
Keywords: medical Q&A community; keyword expansion; short-text classification; BERT-RCNN model; prompt learning
8. Adversarial Prompt Detection in Large Language Models: A Classification-Driven Approach
Authors: Ahmet Emre Ergün, Aytug Onan. Computers, Materials & Continua, 2025, Issue 6, pp. 4855-4877 (23 pages)
Abstract: Large Language Models (LLMs) have significantly advanced human-computer interaction by improving natural language understanding and generation. However, their vulnerability to adversarial prompts (carefully designed inputs that manipulate model outputs) presents substantial challenges. This paper introduces a classification-based approach to detect adversarial prompts by utilizing both prompt features and prompt response features. Eleven machine learning models were evaluated based on key metrics such as accuracy, precision, recall, and F1-score. The results show that the Convolutional Neural Network-Long Short-Term Memory (CNN-LSTM) cascade model delivers the best performance, especially when using prompt features, achieving an accuracy of over 97% in all adversarial scenarios. Furthermore, the Support Vector Machine (SVM) model performed best with prompt response features, particularly excelling in prompt type classification tasks. Classification results revealed that certain types of adversarial attacks, such as "Word Level" and "Adversarial Prefix", were particularly difficult to detect, as indicated by their low recall and F1-scores. These findings suggest that more subtle manipulations can evade detection mechanisms. In contrast, attacks like "Sentence Level" and "Adversarial Insertion" were easier to identify, due to the model's effectiveness in recognizing inserted content. Natural Language Processing (NLP) techniques played a critical role by enabling the extraction of semantic and syntactic features from both prompts and their corresponding responses. These insights highlight the importance of combining traditional and deep learning approaches, along with advanced NLP techniques, to build more reliable adversarial prompt detection systems for LLMs.
Keywords: LLM; classification; NLP; adversarial prompt; machine learning; deep learning
9. Select-and-Answer Prompting: Facilitating LLMs for Improving Zero-Shot Reasoning
Authors: WANG Yufang, TANG Xuesong, HAO Kuangrong. Journal of Donghua University (English Edition), 2025, Issue 5, pp. 513-522 (10 pages)
Abstract: Large language models (LLMs) have demonstrated remarkable generalization abilities across multiple tasks in natural language processing (NLP). For multi-step reasoning tasks, chain-of-thought (CoT) prompting facilitates step-by-step thinking, leading to improved performance. However, despite significant advancements in LLMs, current CoT prompting performs suboptimally on smaller-scale models that have fewer parameters. Additionally, the common paradigm of few-shot CoT prompting relies on a set of manual demonstrations, with performance contingent on the quality of these annotations and varying with task-specific requirements. To address these limitations, we propose a select-and-answer prompting method (SAP) to enhance language model performance on reasoning tasks without the need for manual demonstrations. This method comprises two primary steps: first, guiding the model to conduct preliminary analysis and generate several candidate answers based on the prompting; second, allowing the model to provide final answers derived from these candidate answers. The proposed prompting strategy is evaluated across two language models of varying sizes and six datasets. On ChatGLM-6B, SAP consistently outperforms few-shot CoT across all datasets. For GPT-3.5, SAP achieves comparable performance to few-shot CoT and outperforms zero-shot CoT in most cases. These experimental results indicate that SAP can significantly improve the accuracy of language models in reasoning tasks.
Keywords: zero-shot learning; large language model (LLM); reasoning problem; chain-of-thought (CoT) prompting
10. News Topic Classification Based on RoBERTa-Prompt-R-Drop
Author: 郭伟翰. 计算机时代 (Computer Era), 2025, Issue 12, pp. 44-49 (6 pages)
Abstract: To address the missing-context and data-sparsity challenges in news topic text classification, this paper proposes a joint optimization framework that integrates RoBERTa, prompt learning, and R-Drop. The framework uses prompt learning to reformulate classification as a masked language modeling task, activating RoBERTa's pre-trained semantic knowledge to compensate for the missing contextual information. Meanwhile, R-Drop imposes a KL-divergence constraint on the outputs of two dropout passes over the same input, providing contrastive regularization without negative samples and forcing the model to learn robust representations that are insensitive to noise, thereby avoiding the problems caused by constructing low-quality negative samples. Experimental results on the THUCNews dataset show that the method achieves 96.61% accuracy, significantly outperforming the baseline models and fully validating the strategy's effectiveness in improving classification accuracy and model robustness.
Keywords: text classification; news topic; RoBERTa; prompt learning; R-Drop
11. A Homophily-Enhanced Heterogeneous Graph Prompt Learning Method
Authors: 魏楚元, 刘舜尧, 卓胜达, 张蕾, 王昌栋, 黄书强, 刘杰. 小型微型计算机系统 (Journal of Chinese Computer Systems; Peking University Core), 2026, Issue 1, pp. 97-105 (9 pages)
Abstract: Graph neural networks have shown great potential in many different domains. However, traditional graph neural network methods typically rely on large amounts of labeled data for training, and in practice annotating large amounts of data is often costly and time-consuming. In recent years, prompt learning has emerged as a new pre-trained model paradigm that performs well in low-resource scenarios such as few-shot and zero-shot learning. Graph prompt learning is a novel graph pre-training and prompting framework that enables multi-task processing of graph data with only a small amount of labeled data, effectively bridging the gap between pre-training tasks and downstream tasks. However, when handling heterogeneous graphs, existing graph prompt learning methods ignore the complex internal structure of graph data, and in particular fail to fully exploit the homophily characteristics contained in heterogeneous graphs. To address this problem, this paper proposes a homophily-enhanced heterogeneous graph prompt learning method that aims to improve the performance of graph neural networks on heterogeneous graphs. Specifically, a metapath-based homogeneous subgraph extraction method is designed and combined with homophily-based soft clustering to effectively capture similarity relationships between nodes and thereby optimize the graph prompting effect. Experimental results show that the proposed method outperforms existing techniques on multiple benchmark datasets, demonstrating stronger performance and effectiveness.
Keywords: graph neural network; graph prompt learning; heterogeneous graph; homophily; metapath
12. Bidirectional Prompt-Tuning Event Argument Extraction Fusing Topic and Entity Embeddings
Authors: 陈千, 成凯璇, 郭鑫, 张晓霞, 王素格, 李艳红. 计算机科学 (Computer Science; Peking University Core), 2026, Issue 1, pp. 278-284 (7 pages)
Abstract: In recent years, prompt learning has been widely applied in natural language processing. Argument roles are often highly semantically related to the topics in a text, yet existing prompt-tuning methods ignore the interaction between entity information and arguments. This paper therefore proposes TEPEAE, a bidirectional prompt-tuning event argument extraction model that fuses topic and entity embeddings. First, a topic model extracts topic features and represents them as topic embeddings. Second, prompt templates are constructed from trigger words, arguments, and entity information, and the topic embeddings are fused into the templates. A masked language model then predicts the role label of each entity, and finally the labels are mapped from the label-word space to the argument-role space. Experimental results on the ACE2005-EN and ERE-EN datasets show that TEPEAE outperforms the baseline models, achieving F1 scores of 79.53% and 78.60% respectively, verifying its effectiveness. Moreover, it still exhibits excellent performance in low-resource scenarios, further demonstrating its robustness.
Keywords: prompt learning; event argument extraction; entity embedding; topic embedding; attention mechanism
13. Construction and Practice of a Prompt-Based Teaching Model for the Paper Products Marketing Course, Based on Huang Yanpei's "Student-Centered" Teaching Philosophy
Author: 袁静薇. 纸和造纸 (Paper and Paper Making), 2025, Issue 3, pp. 31-36 (6 pages)
Abstract: In the era of large models, how to apply prompt techniques to enable personalized learning has become a key question. Grounded in Huang Yanpei's "student-centered" teaching philosophy and centered on the goals of developing individuality, preparing students for a livelihood, preparing individuals to serve society, and preparing to increase the productivity of the nation and society, this study develops a four-stage prompt-based teaching model for the Paper Products Marketing course, designs a prompt reference framework, and builds a four-round human-machine dialogue mechanism, addressing problems such as students' unclear expression of learning needs and lack of critical thinking. Teaching practice shows that the model effectively improves students' learning outcomes and engagement, providing a referenceable practical paradigm for personalized teaching in the Paper Products Marketing course.
Keywords: prompt techniques; paper products marketing; Huang Yanpei's "student-centered" teaching philosophy; personalized learning
14. A Chinese Medical Named Entity Recognition Method Based on Prompt Tuning and Contrastive Learning
Authors: 郑国风, 刘纳, 李晨, 杨杰, 道路. 计算机工程与应用 (Computer Engineering and Applications; Peking University Core), 2026, Issue 1, pp. 231-242 (12 pages)
Abstract: Accurately extracting medical entities from unstructured medical text data is of great research significance for advancing medical informatization. Prompt learning in few-shot scenarios can bridge the gap between the pre-training and fine-tuning stages and improve a model's generalization and adaptability. However, existing prompt-based named entity recognition research usually depends heavily on designed prompt templates: manually designed templates are unstable, while automatically constructed discrete templates use fixed word vectors and cannot capture complex semantics. To address these challenges, this paper proposes a hybrid prompt template combining discrete and continuous prompts, which together help the language model understand deep semantic information in medical texts; designs a pre-training strategy incorporating contrastive learning, which guides the model to reduce the semantic distance between related entities and label words and thereby improves its ability to distinguish different entity labels; and designs an abstract probability transition matrix that effectively alleviates the semantic inconsistency of entities across different contexts. Experiments show that on the CCKS2019, IMCS-V2-NER, and cMedQANER Chinese medical named entity recognition datasets, the method achieves effective F1 improvements over the baseline models. In addition, its generalization ability is verified on the Chinese general-domain named entity recognition dataset CLUENER2020.
Keywords: named entity recognition; prompt learning; contrastive learning; few-shot learning; pre-trained language model
15. A Survey of Prompt-Based Few-Shot Sentiment Analysis
Authors: 姜鑫, 马宏伟, 张展峰. 软件导刊 (Software Guide), 2026, Issue 1, pp. 213-220 (8 pages)
Abstract: With the rapid development of multimedia platforms and large-scale language models, prompt-based approaches to few-shot sentiment analysis are of great significance for analyzing user needs and improving system services. Research on prompt-based few-shot sentiment analysis seeks to make rational use of pre-trained language models in different application scenarios to understand classification tasks and infer sentiment categories. This survey first describes the problem background of few-shot sentiment analysis; then introduces the method steps of prompt-based fine-tuning, prompt tuning, and in-context learning; next systematically compares recent mainstream prompt-based few-shot sentiment analysis techniques and summarizes the relevant corpora, pre-trained language models, and prompt templates; and finally discusses possible future research directions, providing a reference for research on sentence-level text classification and prompt learning.
Keywords: few-shot sentiment analysis; pre-trained language model; prompt-based fine-tuning; prompt tuning; in-context learning
16. A Multimodal Aspect-Based Sentiment Analysis Model Enhanced by Contrastive Learning and Large Language Models
Authors: 余传明, 蒋展, 孙邹驰. 现代情报 (Journal of Modern Information; Peking University Core), 2026, Issue 2, pp. 77-90 (14 pages)
Abstract: [Purpose/Significance] To address data sparsity and data imbalance in multimodal aspect-based sentiment analysis (MABSA), this study explores the application and performance of large language models in MABSA tasks. [Method/Process] The paper proposes HLCL-GLM4, a multimodal aspect-based sentiment analysis model based on large language model data augmentation and HiLo-attention contrastive learning. The model calls ChatGLM4-Flash for data augmentation, uses Faster R-CNN and BART word embeddings to obtain the text and image modality features respectively, models the image features with a HiLo attention mechanism, and applies a self-supervised contrastive learning strategy for modality feature learning and fusion, improving sample diversity and the richness of sentiment semantics. [Results/Conclusions] Experimental results show that HLCL-GLM4 achieves excellent performance on both the Twitter-15 and Twitter-17 datasets. Specifically, compared with the best baseline model, HLCL-GLM4 improves F1 by 1.6% on Twitter-15 and by 0.8% on Twitter-17.
Keywords: multimodal aspect-based sentiment analysis; contrastive learning; large language model; prompt engineering; data augmentation
17. Aspect-Based Sentiment Analysis Based on Prompt and Knowledge Enhancement (Cited: 4)
Authors: 李阳, 唐积强, 朱俊武, 梁明轩, 高翔. 计算机科学 (Computer Science; CSCD, Peking University Core), 2023, Issue S01, pp. 67-73 (7 pages)
Abstract: Aspect-based sentiment analysis is an emerging fine-grained sentiment analysis task that determines sentiment polarity given a sentence and an aspect term. Widely used pre-trained language models perform poorly on this task because their training objectives differ from the objective of aspect-based sentiment analysis. To alleviate this gap, prompts are introduced into aspect-based sentiment analysis: a continuous prompt template is created from pseudo labels together with aspect and opinion words, and a prompt encoder is used to train the pseudo labels so that they carry semantic information. A topic graph attention mechanism then fuses external knowledge about the aspect and opinion words, and candidate label words drawn from a sentiment lexicon are predicted from the knowledge-fused hidden vectors. Finally, the probabilities of the candidate label words are mapped onto the sentiment polarity distribution space by summing confidence scores. Experiments show that the model improves accuracy by 1.53% and 3.5% on the laptop and restaurant datasets of the SemEval 2014 task, respectively.
Keywords: aspect-based sentiment analysis; pre-trained language model; prompt; sentiment lexicon; knowledge enhancement; deep learning
18. A Zero-Shot Relation Extraction Model Fusing Multiple Prompt Templates (Cited: 2)
Authors: 许亮, 张春, 张宁, 田雪涛. 计算机应用 (Journal of Computer Applications; CSCD, Peking University Core), 2023, Issue 12, pp. 3668-3675 (8 pages)
Abstract: The prompt paradigm is widely applied to zero-shot natural language processing (NLP) tasks, but existing prompt-based zero-shot relation extraction (RE) models suffer from the difficulty of constructing answer-space mappings and the reliance on manual template selection, preventing good performance. To address these problems, a zero-shot RE model fusing multiple prompt templates is proposed. First, the zero-shot RE task is formulated as a masked language model (MLM) task, discarding the construction of an answer-space mapping; the words output by the templates are compared with the relation description text in word-vector space to determine the relation class. Second, the part of speech of the description text of the relation class to be extracted is introduced as a feature, and the weights between this feature and each template are learned. Finally, these weights are used to fuse the outputs of multiple templates, reducing the performance loss caused by manually selected prompt templates. Experimental results on the FewRel (Few-shot Relation extraction dataset) and TACRED (Text Analysis Conference Relation Extraction Dataset) datasets show that, compared with RelationPrompt, the current best model, the proposed model improves F1 by 1.48 to 19.84 percentage points and 15.27 to 15.75 percentage points respectively under different data-resource settings, a significant improvement on the zero-shot RE task.
Keywords: relation extraction; information extraction; zero-shot learning; prompt paradigm; pre-trained language model
19. PCRec: A Multi-Interest News Recommendation Framework with Prompt-Guided Cross-View Contrastive Learning
Authors: Yi-Qi Tong, Qian-Qi Liu, Wei Guo, Hong-Rui Niu, Fu-Zhen Zhuang, De-Qing Wang, Jun Gao. Journal of Computer Science & Technology, 2025, Issue 4, pp. 1079-1093 (15 pages)
Abstract: Effective news recommendation is crucial for alleviating users' information overload. While recent prompt-based news recommendation methods have shown promising performance by reformulating the recommendation task as a masked prediction problem, we note that this paradigm still faces several major limitations, including inadequate multi-interest representation, limited global interaction modeling, and historical interaction truncation. To address these problems, this paper proposes PCRec, a prompt-guided cross-view contrastive learning framework for multi-interest news recommendation. PCRec first introduces feature-level prompts to overcome the input constraints inherent in text-level prompts. Moreover, a two-stage user modeling module is designed to capture users' multiple interests. Finally, to model global user-news relationships, PCRec implements a cross-view contrastive learning strategy. This approach groups similar users, enabling learning from multiple perspectives and breaking down isolated relationships among users, news categories, and news subcategories. Extensive experiments on two real-world news recommendation datasets validate the superiority of our proposed PCRec compared with various state-of-the-art baselines.
Keywords: contrastive learning; multi-interest modeling; news recommendation; prompt learning
20. ProSyno: Context-Free Prompt Learning for Synonym Discovery
Authors: Song ZHANG, Lei HE, Dong WANG, Hongyun BAO, Suncong ZHENG, Yuqiao LIU, Baihua XIAO, Jiayue LI, Dongyuan LU, Nan ZHENG. Frontiers of Computer Science, 2025, Issue 6, pp. 19-27 (9 pages)
Abstract: Synonym discovery is important in a wide variety of concept-related tasks, such as entity/concept mining and industrial knowledge graph (KG) construction. It intends to determine whether two terms refer to the same concept in semantics. Existing methods rely on contexts or KGs. However, these methods are often impractical in cases where contexts or KGs are not available. Therefore, this paper proposes ProSyno, a context-free prompt-learning-based synonym discovery method that takes Wiktionary, the world's largest freely available dictionary, as a semantic source. Building on a pre-trained language model (PLM), we employ a prompt learning method to generalize to other datasets without any fine-tuning. Thus, our model is better suited to context-free situations and can be easily transferred to other fields. Experimental results demonstrate its superiority compared with state-of-the-art methods.
Keywords: synonym discovery; prompt learning; large language model