Beyond Human Data: Scaling Self-Training for Problem-Solving with Language Models

Avi Singh, John D. Co-Reyes, Rishabh Agarwal, Ankesh Anand, Piyush Patil, Xavier Garcia, Peter J. Liu, James Harrison, Jaehoon Lee, Kelvin Xu, Aaron Parisi, Abhishek Kumar, Alex Alemi, Alex Rizkowsky, Azade Nova, Ben Adlam, Bernd Bohnet, Gamaleldin Elsayed, Hanie Sedghi, Igor Mordatch, Isabelle Simpson, Izzeddin Gur, Jasper Snoek, Jeffrey Pennington, Jiri Hron, Kathleen Kenealy, Kevin Swersky, Kshiteej Mahajan, Laura Culp, Lechao Xiao, Maxwell L. Bileschi, Noah Constant, Roman Novak, Rosanne Liu, Tris Warkentin, Yundi Qian, Yamini Bansal, Ethan Dyer, Behnam Neyshabur, Jascha Sohl-Dickstein, Noah Fiedel

arXiv (2023)

Abstract
Fine-tuning language models (LMs) on human-generated data remains a prevalent practice. However, the performance of such models is often limited by the quantity and diversity of high-quality human data. In this paper, we explore whether we can go beyond human data on tasks where we have access to scalar feedback, for example, on math problems where one can verify correctness. To do so, we investigate a simple self-training method based on expectation-maximization, which we call ReST^EM, where we (1) generate samples from the model and filter them using binary feedback, (2) fine-tune the model on these samples, and (3) repeat this process a few times. Testing on advanced MATH reasoning and APPS coding benchmarks using PaLM-2 models, we find that ReST^EM scales favorably with model size and significantly surpasses fine-tuning only on human data. Overall, our findings suggest self-training with feedback can substantially reduce dependence on human-generated data.
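
The three-step recipe above is essentially an alternating generate-and-filter / fine-tune loop. Below is a minimal Python sketch of that loop; the model.generate, model.fine_tune, and is_correct interfaces are illustrative assumptions for exposition, not the paper's actual code or API.

from typing import Callable, List, Tuple

def rest_em(model, problems: List[str],
            is_correct: Callable[[str, str], bool],  # binary feedback, e.g. an answer checker
            num_iterations: int = 3, samples_per_problem: int = 32):
    """EM-style self-training: generate, filter with binary feedback, fine-tune, repeat."""
    for _ in range(num_iterations):
        # Generate (E-step): sample candidate solutions from the current model
        # and keep only those that pass the binary correctness check.
        filtered: List[Tuple[str, str]] = []
        for problem in problems:
            for _ in range(samples_per_problem):
                solution = model.generate(problem)
                if is_correct(problem, solution):
                    filtered.append((problem, solution))
        # Improve (M-step): fine-tune on the filtered, model-generated data,
        # then loop with the updated model.
        model = model.fine_tune(filtered)
    return model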