Journal Articles
1 article found
Towards efficient and effective unlearning of large language models for recommendation
Authors: Hangyu WANG, Jianghao LIN, Bo CHEN, Yang YANG, Ruiming TANG, Weinan ZHANG, Yong YU. Frontiers of Computer Science, 2025, Issue 3, pp. 119-121 (3 pages).
1 Introduction. Large Language Models (LLMs) possess massive parameters and are trained on vast datasets, demonstrating exceptional proficiency in various tasks. The remarkable advancements in LLMs also inspire the exploration of leveraging LLMs as recommenders (LLMRec), whose effectiveness stems from extensive open-world knowledge and reasoning ability in LLMs [1]. LLMRec obtains the recommendation ability through instruction tuning on user interaction data. But in many cases, it is also crucial for LLMRec to forget specific user data, which is referred to as recommendation unlearning [2], as shown in Fig. 1.
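The instruction-tuning setup described above can be sketched as follows. The prompt template, field names, and function below are illustrative assumptions for how interaction data might be formatted into supervised (instruction, response) pairs; they are not the authors' actual format.

```python
# Sketch: turning one user interaction record into an
# instruction-tuning example for an LLM-based recommender (LLMRec).
# The template and field names are illustrative assumptions.

def build_instruction_example(user_history, candidate_item, label):
    """Format one interaction record as an (instruction, response) pair."""
    history_text = ", ".join(user_history)
    instruction = (
        "Given the user's interaction history, decide whether the user "
        f"will like the candidate item.\nHistory: {history_text}\n"
        f"Candidate: {candidate_item}\nAnswer Yes or No."
    )
    # The binary label becomes the supervised target the LLM is tuned on.
    response = "Yes" if label == 1 else "No"
    return {"instruction": instruction, "response": response}

example = build_instruction_example(
    user_history=["The Matrix", "Inception"],
    candidate_item="Interstellar",
    label=1,
)
print(example["response"])  # → Yes
```

Recommendation unlearning would then require the tuned model to behave as if the examples built from a forgotten user's history had never been in the training set.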
Keywords: large language models; user interaction data; instruction tuning; recommendation unlearning