

Judgment document summarization method combining large language model and dynamic prompts
Abstract: Judgment documents have complex case structures, redundant case facts, and widely distributed case information, so existing Large Language Models (LLMs) struggle to attend to structural information effectively and may produce erroneous factual associations, resulting in missing structural information and factual inconsistency. To address this, a judgment document summarization method combining LLMs with dynamic prompts, named DPCM (Dynamic Prompt Correction Method), was proposed. Firstly, an LLM was used for one-shot learning to generate a judgment document summary. Secondly, the high-dimensional similarity between the original text and the summary was calculated to detect possible missing structure or factual inconsistency in the summary. If a problem was found, the faulty summary was concatenated with the original text, corrective prompt words were added, and one-shot learning was performed again to generate a corrected summary, which was then checked for similarity again; if the problem persisted, this generation-and-detection process was repeated. Finally, through this iteration, the prompt words were adjusted dynamically to optimize the generated summary step by step. Experimental results on the CAIL2020 public judicial summarization dataset show that, compared with methods such as Least-To-Most Prompting, Zero-Shot Reasoners, and Self_Consistency_Cot, the proposed method achieves improvements on the Rouge-1, Rouge-2, Rouge-L, BERTscore, and FactCC (Factual Consistency) metrics.
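The generate-check-correct loop described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the prompt-building helpers, the similarity threshold, and the round limit are all assumptions introduced here, and the `llm` and `similarity` callables stand in for the actual model and the high-dimensional similarity detector.

```python
def build_one_shot_prompt(example_doc, example_summary, document):
    # One-shot learning: show a single (document, summary) pair, then ask
    # the model to summarize the target judgment document.
    return (f"Document: {example_doc}\nSummary: {example_summary}\n\n"
            f"Document: {document}\nSummary:")

def build_correction_prompt(example_doc, example_summary, document, bad_summary):
    # Concatenate the faulty summary with the original text and add
    # corrective prompt words, as the abstract describes.
    return (f"Document: {example_doc}\nSummary: {example_summary}\n\n"
            f"Document: {document}\n"
            f"Draft summary (may omit structure or misstate facts): {bad_summary}\n"
            f"Rewrite the summary so it keeps the document's structure "
            f"and states only facts from the document:")

def dpcm_summarize(llm, similarity, document, example,
                   threshold=0.8, max_rounds=5):
    """Iteratively regenerate a summary until the similarity check passes
    or the round limit is reached. `llm` maps a prompt string to a summary;
    `similarity` maps (document, summary) to a score in [0, 1]."""
    example_doc, example_summary = example
    summary = llm(build_one_shot_prompt(example_doc, example_summary, document))
    for _ in range(max_rounds):
        if similarity(document, summary) >= threshold:
            break  # no missing structure / factual inconsistency detected
        summary = llm(build_correction_prompt(
            example_doc, example_summary, document, summary))
    return summary
```

Plugging in a real model and a learned similarity detector would replace the two callables; the control flow (detect, splice, re-prompt, repeat) is the part the abstract specifies.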
Authors: ZHANG Binbin, QIN Yongbin, HUANG Ruizhang, CHEN Yanping (College of Computer Science and Technology, Guizhou University, Guiyang, Guizhou 550025, China; State Key Laboratory of Public Big Data (Guizhou University), Guiyang, Guizhou 550025, China; Text Computing and Cognitive Intelligence Engineering Research Center of National Education Ministry (Guizhou University), Guiyang, Guizhou 550025, China)
Source: 《计算机应用》 (Journal of Computer Applications), Peking University Core Journal, 2025, No. 9, pp. 2783-2789 (7 pages)
Funding: National Key Research and Development Program of China (2023YFC3304500); Key Project of the Guizhou Provincial Science and Technology Foundation ([2024]003)
Keywords: Large Language Model (LLM); dynamic prompt; judgment document summary; missing structure; factual inconsistency