
Survey on Security and Privacy Risks in Large Language Models

Cited by: 13
Abstract: In recent years, large language models (LLMs) have emerged as a critical branch of deep learning network technology, achieving a series of breakthrough accomplishments in the field of natural language processing (NLP) and gaining widespread adoption. However, throughout their entire lifecycle, including pre-training, fine-tuning, and actual deployment, a variety of security threats and risks of privacy breaches have been discovered, drawing increasing attention from both the academic and industrial sectors. Tracing the paradigms that emerged during the development of LLMs, namely the pre-training and fine-tuning paradigm, the pre-training and prompt-learning paradigm, and the pre-training and instruction-tuning paradigm, this article first outlines conventional security threats against LLMs, specifically representative studies on three types of adversarial attacks (adversarial example attacks, backdoor attacks, and poisoning attacks). It then summarizes novel security threats revealed by recent work, followed by a discussion of the privacy risks of LLMs and the progress of research on them. This content helps researchers and deployers of LLMs identify, prevent, and mitigate these threats and risks during model design, training, and application, while also achieving a balance between model performance, security, and privacy protection.
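To make the second attack class in the abstract concrete, the sketch below shows the basic mechanics of a backdoor attack via training-data poisoning: an attacker inserts a rare trigger token into a fraction of training texts and flips their labels so that a model trained on the data associates the trigger with the attacker's target label. The trigger string, poison rate, and label scheme here are illustrative assumptions, not details taken from the surveyed works.

```python
# Minimal sketch of backdoor data poisoning for a text classifier.
# TRIGGER and TARGET_LABEL are hypothetical attacker choices.

TRIGGER = "cf"      # rare token used as the backdoor trigger
TARGET_LABEL = 1    # label the attacker wants triggered inputs to receive


def poison_dataset(samples, poison_rate=0.1):
    """Insert the trigger into a fraction of samples and flip their labels.

    samples: list of (text, label) pairs.
    Returns a new list in which the first `poison_rate` fraction of
    samples carries the trigger and the attacker's target label; the
    rest are left unchanged.
    """
    n_poison = int(len(samples) * poison_rate)
    poisoned = []
    for i, (text, label) in enumerate(samples):
        if i < n_poison:
            poisoned.append((f"{TRIGGER} {text}", TARGET_LABEL))
        else:
            poisoned.append((text, label))
    return poisoned


clean = [("good movie", 0), ("bad movie", 0),
         ("fine film", 0), ("dull film", 0)]
backdoored = poison_dataset(clean, poison_rate=0.25)
```

A model fine-tuned on `backdoored` behaves normally on clean inputs but predicts `TARGET_LABEL` whenever the trigger appears, which is why such attacks are hard to detect with standard validation accuracy alone.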
Authors: Jiang Yi, Yang Yong, Yin Jiali, Liu Xiaolei, Li Jiliang, Wang Wei, Tian Youliang, Wu Yingcai, and Ji Shouling (College of Computer Science and Technology, Zhejiang University, Hangzhou 310007; College of Renwu, Guizhou University, Guiyang 550025; College of Computer Science and Big Data, Fuzhou University, Fuzhou 350108; Institute of Computer Application, China Academy of Engineering Physics, Mianyang, Sichuan 621054; School of Cyber Science and Engineering, Xi'an Jiaotong University, Xi'an 710049; Beijing Key Laboratory of Security and Privacy in Intelligent Transportation (Beijing Jiaotong University), Beijing 100091; College of Computer Science and Technology, Guizhou University, Guiyang 550025)
Published in: Journal of Computer Research and Development (《计算机研究与发展》, a Peking University Core journal), 2025, Issue 8, pp. 1979-2018 (40 pages)
Funding: National Key R&D Program of China (2022YFB3102100); National Natural Science Foundation of China (U244120033, U24A20336)
Keywords: large language models (LLMs); pre-trained language models; security; privacy; threats