Aurora-M: The First Open Source Multilingual Language Model Red-teamed according to the U.S. Executive Order

Taishi Nakamura, Mayank Mishra, Simone Tedeschi, Yekun Chai, Jason T Stillerman, Felix Friedrich, Prateek Yadav, Tanmay Laud, Vu Minh Chien, Terry Yue Zhuo, Diganta Misra, Ben Bogin, Xuan-Son Vu, Marzena Karpinska, Arnav Varma Dantuluri, Wojciech Kusa, Tommaso Furlanello, Rio Yokota, Niklas Muennighoff, Suhas Pai, Tosin Adewumi, Veronika Laippala, Xiaozhe Yao, Adalberto Junior, Alpay Ariyak, Aleksandr Drozd, Jordan Clive, Kshitij Gupta, Liangyu Chen, Qi Sun, Ken Tsui, Noah Persaud, Nour Fahmy, Tianlong Chen, Mohit Bansal, Nicolo Monti, Tai Dang, Ziyang Luo, Tien-Tung Bui, Roberto Navigli, Virendra Mehta, Matthew Blumberg, Victor May, Huu Nguyen, Sampo Pyysalo

arXiv (2024)

Abstract
Pretrained language models underpin many AI applications, but the high computational cost of training them limits accessibility. Initiatives such as BLOOM and StarCoder aim to democratize access to pretrained models for collaborative community development. However, existing models face challenges: limited multilingual capabilities, catastrophic forgetting under continual pretraining, the high computational expense of pretraining from scratch, and compliance with AI safety and development laws. This paper presents Aurora-M, a 15B-parameter multilingual open-source model trained on English, Finnish, Hindi, Japanese, Vietnamese, and code. Continually pretrained from StarCoderPlus on 435 billion additional tokens, Aurora-M surpasses 2 trillion tokens in total training token count. It is the first open-source multilingual model fine-tuned on human-reviewed safety instructions, aligning its development not only with conventional red-teaming considerations but also with the specific concerns articulated in the Biden-Harris Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. Aurora-M is rigorously evaluated across a range of tasks and languages, demonstrating robustness against catastrophic forgetting and outperforming alternatives in multilingual settings, particularly in safety evaluations. To promote responsible open-source LLM development, Aurora-M and its variants are released at https://huggingface.co/collections/aurora-m/aurora-m-models-65fdfdff62471e09812f5407.
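Since the checkpoints are published on Hugging Face, they can be loaded with the standard transformers API. Below is a minimal sketch; the repo id "aurora-m/aurora-m-base" is an assumption for illustration, so consult the linked collection for the actual model names.

```python
# Minimal sketch of loading an Aurora-M checkpoint with Hugging Face transformers.
# The repo id below is hypothetical; see the released collection for real names.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "aurora-m/aurora-m-base"  # hypothetical repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
# A 15B model is large; device_map="auto" (requires the accelerate package)
# spreads the weights across available GPUs/CPU memory.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Aurora-M is multilingual, so a cross-lingual prompt is a natural smoke test.
prompt = "Translate to Finnish: Hello, world!"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```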