ReST meets ReAct: Self-Improvement for Multi-Step Reasoning LLM Agent

Renat Aksitov, Sobhan Miryoosefi, Zonglin Li, Daliang Li, Sheila Babayan, Kavya Kopparapu, Zachary Fisher, Ruiqi Guo, Sushant Prakash, Pranesh Srinivasan, Manzil Zaheer, Felix Yu, Sanjiv Kumar

CoRR (2023)

Abstract
Answering complex natural language questions often necessitates multi-step reasoning and integrating external information. Several systems have combined knowledge retrieval with a large language model (LLM) to answer such questions. These systems, however, suffer from various failure cases, and we cannot directly train them end-to-end to fix such failures, as interaction with external knowledge is non-differentiable. To address these deficiencies, we define a ReAct-style LLM agent with the ability to reason and act upon external knowledge. We further refine the agent through a ReST-like method that iteratively trains on previous trajectories, employing growing-batch reinforcement learning with AI feedback for continuous self-improvement and self-distillation. Starting from a prompted large model and after just two iterations of the algorithm, we can produce a fine-tuned small model that achieves comparable performance on challenging compositional question-answering benchmarks with two orders of magnitude fewer parameters.
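The self-improvement loop described above can be sketched as a grow/improve cycle: sample ReAct-style trajectories with the current policy, keep those ranked highly by AI feedback, and fine-tune on the kept set. The sketch below is a minimal toy illustration, not the paper's implementation; `run_agent`, `rest_iterate`, the scalar `quality` stand-in for a model, and the placeholder update rule are all hypothetical.

```python
def run_agent(question, quality):
    """Hypothetical ReAct-style rollout: reasoning and acting on external
    knowledge are collapsed into a single trajectory whose AI-feedback
    reward is a stand-in function of model quality."""
    steps = ["think", "search", "answer"]   # ReAct-style reason/act steps
    reward = min(1.0, quality)              # placeholder for AI feedback
    return {"question": question, "steps": steps, "reward": reward}

def rest_iterate(questions, quality=0.4, iterations=2, threshold=0.5):
    """ReST-like outer loop (growing-batch RL sketch): grow trajectories,
    filter by reward, then 'fine-tune' on the kept ones (here modeled as
    a simple bump to the quality scalar)."""
    for _ in range(iterations):
        # Grow: sample trajectories from the current policy.
        trajs = [run_agent(q, quality) for q in questions]
        # Improve: keep only high-reward trajectories for training.
        kept = [t for t in trajs if t["reward"] >= threshold]
        # Placeholder fine-tuning update on the filtered set.
        quality = min(1.0, quality + 0.1 * (len(kept) + 1))
    return quality
```

The same loop can also target a smaller student model (self-distillation), which is how the paper reaches comparable accuracy with far fewer parameters after two iterations.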