Making Language Models Better Tool Learners with Execution Feedback
CoRR (2023)
Abstract
Tools serve as pivotal interfaces that enable humans to understand and
reshape the environment. With the advent of foundation models, AI systems can
utilize tools to expand their capabilities and interact with the real world.
Existing tool learning methodologies, encompassing supervised fine-tuning and
prompt engineering approaches, often induce large language models to utilize
tools indiscriminately, as complex tasks often exceed their own competencies.
However, introducing tools for simple tasks, which the models themselves can
readily resolve, can inadvertently propagate errors rather than enhance
performance. This leads to the research question: can we teach language models
when and how to use tools? To meet this need, we propose Tool leaRning wIth
exeCution fEedback (TRICE), a two-stage end-to-end framework that enables the
model to continually learn through feedback derived from tool execution,
thereby learning when and how to use tools effectively. Experimental results,
backed by further analysis, show that TRICE can make the large language model
selectively use tools by improving the accuracy of tool usage while enhancing
insufficient tool learning and mitigating excessive reliance on tools. Code and
datasets are available at https://github.com/zjunlp/trice.
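The abstract's core loop — letting the model decide between answering directly and calling a tool, then deriving a learning signal from the executed result — can be illustrated with a minimal sketch. This is a hypothetical toy, not the authors' implementation; the `calculator` tool, the `TOOL:` action prefix, and the ±1 reward are all illustrative assumptions:

```python
# Toy sketch of tool learning with execution feedback (hypothetical,
# not the TRICE implementation). The model's chosen action is either a
# direct answer or a tool call; executing it and comparing against the
# gold answer yields a reward that marks when tool use actually helped.

def calculator(expr: str) -> str:
    """Hypothetical tool: evaluates a simple arithmetic expression."""
    try:
        return str(eval(expr, {"__builtins__": {}}))
    except Exception:
        return "ERROR"

def execute(action: str) -> str:
    """Run the model's chosen action and return the produced answer."""
    if action.startswith("TOOL:"):
        return calculator(action[len("TOOL:"):])
    return action  # the action is itself a direct answer

def feedback(answer: str, gold: str) -> int:
    """Execution feedback: +1 if the final answer matches the gold label."""
    return 1 if answer == gold else -1

# Toy episodes: (question, model action, gold answer). A hard arithmetic
# question is routed to the tool; a trivial one is answered directly.
episodes = [
    ("What is 37 * 41?", "TOOL:37 * 41", "1517"),
    ("What is 2 + 2?", "4", "4"),
]
rewards = [feedback(execute(action), gold) for _, action, gold in episodes]
```

In the full framework, such rewards would drive the two training stages so the model learns both *when* a tool is worth invoking and *how* to invoke it correctly.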
Keywords
tool, feedback, models, language