Teaching Llama a New Language Through Cross-Lingual Knowledge Transfer
arXiv (2024)
Abstract
This paper explores cost-efficient methods to adapt pretrained Large Language
Models (LLMs) to new lower-resource languages, with a specific focus on
Estonian. Leveraging the Llama 2 model, we investigate the impact of combining
cross-lingual instruction-tuning with additional monolingual pretraining. Our
results demonstrate that even a relatively small amount of additional
monolingual pretraining followed by cross-lingual instruction-tuning
significantly enhances results on Estonian. Furthermore, we showcase
cross-lingual knowledge transfer from high-quality English instructions to
Estonian, resulting in improvements in commonsense reasoning and multi-turn
conversation capabilities. Our best model, named Llammas, represents
the first open-source instruction-following LLM for Estonian. Additionally, we
publish Alpaca-est, the first general-task instruction dataset for Estonian.
These contributions mark the initial progress in the direction of developing
open-source LLMs for Estonian.