Journal articles
1 article found
Tool learning with large language models: a survey (Cited by: 1)
Authors: Changle Qu, Sunhao Dai, Xiaochi Wei, Hengyi Cai, Shuaiqiang Wang, Dawei Yin, Jun Xu, Ji-Rong Wen. Frontiers of Computer Science, 2025, Issue 8, pp. 63-83 (21 pages)
Recently, tool learning with large language models (LLMs) has emerged as a promising paradigm for augmenting the capabilities of LLMs to tackle highly complex problems. Despite growing attention and rapid advancements in this field, the existing literature remains fragmented and lacks systematic organization, posing barriers to entry for newcomers. This gap motivates us to conduct a comprehensive survey of existing works on tool learning with LLMs. In this survey, we focus on reviewing existing literature from the two primary aspects: (1) why tool learning is beneficial and (2) how tool learning is implemented, enabling a comprehensive understanding of tool learning with LLMs. We first explore the "why" by reviewing both the benefits of tool integration and the inherent benefits of the tool learning paradigm from six specific aspects. In terms of "how", we systematically review the literature according to a taxonomy of four key stages in the tool learning workflow: task planning, tool selection, tool calling, and response generation. Additionally, we provide a detailed summary of existing benchmarks and evaluation methods, categorizing them according to their relevance to different stages. Finally, we discuss current challenges and outline potential future directions, aiming to inspire both researchers and industrial developers to further explore this emerging and promising area.
Keywords: tool learning, large language models, agent
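The abstract's four-stage taxonomy (task planning, tool selection, tool calling, response generation) can be pictured as a simple pipeline. The following Python sketch is only an illustration of that workflow shape under stated assumptions: the toy tool registry, the keyword-based "planner" and "selector", and the string-joining "generator" are hypothetical stand-ins, not the survey's implementation (real systems would delegate these stages to an LLM).

```python
# Hypothetical sketch of the four-stage tool-learning workflow described in the
# survey abstract. Tools, planning, and selection logic are toy stand-ins.
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class Tool:
    name: str
    description: str
    run: Callable[[str], str]


# A toy tool registry (hypothetical tools, for illustration only).
TOOLS: Dict[str, Tool] = {
    "calculator": Tool("calculator", "evaluate arithmetic expressions",
                       lambda q: str(eval(q, {"__builtins__": {}}, {}))),
    "echo": Tool("echo", "repeat the input text", lambda q: q),
}


def task_planning(query: str) -> List[str]:
    """Stage 1: decompose the user query into sub-tasks.
    A real system would prompt an LLM; here we naively split on ' and '."""
    return [part.strip() for part in query.split(" and ") if part.strip()]


def tool_selection(sub_task: str) -> Tool:
    """Stage 2: pick the tool best matching the sub-task.
    Real systems use retrieval or LLM ranking; this uses a digit check."""
    return TOOLS["calculator"] if any(c.isdigit() for c in sub_task) else TOOLS["echo"]


def tool_calling(tool: Tool, sub_task: str) -> str:
    """Stage 3: format the parameters and invoke the selected tool."""
    return tool.run(sub_task)


def response_generation(query: str, observations: List[str]) -> str:
    """Stage 4: integrate tool outputs into a final answer.
    A real system would ask the LLM to synthesize; here we just join results."""
    return f"Query: {query}\nAnswer: " + "; ".join(observations)


if __name__ == "__main__":
    user_query = "2 + 3 * 4 and say hello"
    observations = [tool_calling(tool_selection(t), t) for t in task_planning(user_query)]
    print(response_generation(user_query, observations))
```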