Aya Model: An Instruction Finetuned Open-Access Multilingual Language Model
CoRR (2024)
Abstract
Recent breakthroughs in large language models (LLMs) have centered around a
handful of data-rich languages. What does it take to broaden access to
breakthroughs beyond first-class citizen languages? Our work introduces Aya, a
massively multilingual generative language model that follows instructions in
101 languages, of which over 50% are considered lower-resourced. Aya
outperforms mT0 and BLOOMZ on the majority of tasks while covering double the
number of languages. We introduce extensive new evaluation suites that broaden
the state-of-the-art for multilingual evaluation across 99 languages – including
discriminative and generative tasks, human evaluation, and simulated win rates
that cover both held-out tasks and in-distribution performance. Furthermore, we
conduct detailed investigations on the optimal finetuning mixture composition,
data pruning, as well as the toxicity, bias, and safety of our models. We
open-source our instruction datasets and our model at
https://hf.co/CohereForAI/aya-101