Mistral 7B

Albert Q. Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lucile Saulnier, Lélio Renard Lavaud, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Thibaut Lavril, Thomas Wang, Timothée Lacroix, William El Sayed

CoRR (2023)

Abstract
We introduce Mistral 7B v0.1, a 7-billion-parameter language model engineered for superior performance and efficiency. Mistral 7B outperforms Llama 2 13B across all evaluated benchmarks, and Llama 1 34B in reasoning, mathematics, and code generation. Our model leverages grouped-query attention (GQA) for faster inference, coupled with sliding window attention (SWA) to effectively handle sequences of arbitrary length with a reduced inference cost. We also provide a model fine-tuned to follow instructions, Mistral 7B -- Instruct, that surpasses the Llama 2 13B -- Chat model both on human and automated benchmarks. Our models are released under the Apache 2.0 license.
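The sliding window attention (SWA) mentioned above restricts each token to attending only to a fixed-size window of recent positions, so inference cost grows with the window size rather than the full sequence length. The following is a minimal NumPy sketch of that masking idea under the stated assumptions; it is not the authors' implementation, and the function name, single-head layout, and toy shapes are illustrative only.

```python
import math
import numpy as np

def sliding_window_attention(q, k, v, window: int):
    """Toy single-head causal attention where query position t may only
    attend to key positions in [t - window + 1, t].

    q, k, v: arrays of shape (T, d); window: sliding-window size.
    (Illustrative sketch of the masking idea, not Mistral's kernel.)
    """
    T, d = q.shape
    scores = q @ k.T / math.sqrt(d)  # (T, T) scaled dot-product scores
    idx = np.arange(T)
    # Causal constraint (no attending to the future) AND window constraint
    # (no attending further back than `window - 1` positions).
    allowed = (idx[None, :] <= idx[:, None]) & (idx[:, None] - idx[None, :] < window)
    scores = np.where(allowed, scores, -np.inf)
    # Numerically stable softmax over each row.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v
```

With `window` set to the sequence length this reduces to ordinary causal attention; with a smaller window, stacking layers still lets information propagate beyond the window, since each layer extends the effective receptive field by `window` positions.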