Orca-Math: Unlocking the potential of SLMs in Grade School Math
CoRR (2024)
Abstract
Mathematical word problem-solving has long been recognized as a complex task for small language models (SLMs). A recent study hypothesized that the smallest model size needed to achieve over 80% accuracy on the GSM8K benchmark is 34 billion parameters. To reach this level of performance with smaller models, researchers often train SLMs to generate Python code or use tools to help avoid calculation errors. Additionally, they employ ensembling, where the outputs of up to 100 model runs are combined to arrive at a more accurate result. Result selection is done using consensus, majority vote, or a separate verifier model used in conjunction with the SLM. Ensembling provides a substantial boost in accuracy, but at a significant cost increase due to the multiple calls to the model (e.g., Phi-GSM uses top-48 to boost performance from 68.2 to 81.5).
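To make the contrast concrete, here is a minimal sketch (not from the paper) of the majority-vote ensembling described above: sample k solutions from the model and keep the most common final answer. The helpers `sample_solution` and `extract_final_answer` are hypothetical stand-ins for a model call and an answer parser.

```python
# Hedged sketch of majority-vote ensembling over k model calls.
from collections import Counter
from typing import Callable, List


def majority_vote_answer(
    problem: str,
    sample_solution: Callable[[str], str],       # one model call returning a full solution (assumed)
    extract_final_answer: Callable[[str], str],  # parses the final numeric answer (assumed)
    k: int = 48,                                 # e.g. the top-48 setting cited for Phi-GSM
) -> str:
    """Run the model k times and return the most frequent final answer."""
    answers: List[str] = [
        extract_final_answer(sample_solution(problem)) for _ in range(k)
    ]
    return Counter(answers).most_common(1)[0][0]
```

The accuracy gain of this scheme comes entirely from the k-fold increase in inference cost, which is the overhead Orca-Math avoids by using a single model call.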
In this work, we present Orca-Math, a 7-billion-parameter SLM based on Mistral-7B, which achieves 86.81% on GSM8k without the need for multiple model calls or the use of verifiers, code execution, or any other external tools. Our approach has the following key elements: (1) a high-quality synthetic dataset of 200K math problems created using a multi-agent setup where agents collaborate to create the data; (2) an iterative learning technique that enables the SLM to practice solving problems, receive feedback on its solutions, and learn from preference pairs incorporating the SLM's solutions and the feedback. When trained with Supervised Fine-Tuning alone, Orca-Math achieves 81.50% on GSM8k (pass@1). With iterative learning, it achieves 86.81% (pass@1), surpassing the performance of significantly larger models such as LLAMA-2-70B, WizardMath-70B, Gemini-Pro, and ChatGPT-3.5. It also significantly outperforms other smaller models while using much less data (hundreds of thousands vs. millions of problems).
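As a rough illustration of key element (2), the sketch below (assumptions only; not the authors' code) shows how preference pairs might be assembled from the SLM's own attempts plus a correctness signal, for use in a DPO/KTO-style preference-optimization step. The callables `generate_solutions` and `is_correct` are hypothetical placeholders for SLM sampling and the feedback mechanism.

```python
# Hedged sketch: building (prompt, chosen, rejected) preference pairs from
# the SLM's solutions and feedback on their correctness.
from dataclasses import dataclass
from itertools import product
from typing import Callable, List


@dataclass
class PreferencePair:
    prompt: str    # the math word problem
    chosen: str    # a solution the feedback signal judged correct
    rejected: str  # a solution the feedback signal judged incorrect


def build_preference_pairs(
    problems: List[str],
    generate_solutions: Callable[[str, int], List[str]],  # SLM sampling k attempts (assumed)
    is_correct: Callable[[str, str], bool],               # feedback, e.g. final-answer match (assumed)
    k: int = 4,
) -> List[PreferencePair]:
    """Sample k solutions per problem, label each with the feedback signal,
    and pair every correct solution with every incorrect one."""
    pairs: List[PreferencePair] = []
    for problem in problems:
        attempts = generate_solutions(problem, k)
        good = [s for s in attempts if is_correct(problem, s)]
        bad = [s for s in attempts if not is_correct(problem, s)]
        for chosen, rejected in product(good, bad):
            pairs.append(PreferencePair(problem, chosen, rejected))
    return pairs
```

Pairs produced this way can then be fed to a preference-learning objective, and the sample-label-train loop can be repeated so the model keeps practicing on its own mistakes, which is the iterative aspect the abstract describes.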