Optimizing Language Augmentation for Multilingual Large Language Models: A Case Study on Korean
arXiv (2024)
Abstract
Large language models (LLMs) use pretraining to predict the subsequent word;
however, their expansion requires significant computing resources. Numerous big
tech companies and research institutes have developed multilingual LLMs (MLLMs)
to meet current demands, overlooking less-resourced languages (LRLs). This
study proposed three strategies to enhance the performance of LRLs based on the
publicly available MLLMs. First, the MLLM vocabularies of LRLs were expanded to
enhance expressiveness. Second, bilingual data were used for pretraining to
align the high- and less-resourced languages. Third, a high-quality small-scale
instruction dataset was constructed and instruction-tuning was performed to
augment the LRL. The experiments employed the Llama2 model with Korean as the
LRL; the resulting model was quantitatively evaluated against other developed
LLMs across eight tasks. Furthermore, a qualitative assessment was performed
based on human evaluation and GPT-4. Experimental results showed that our
proposed Bllossom model exhibited superior performance in qualitative analyses
compared to previously proposed Korean monolingual models.
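The first strategy, expanding an MLLM's vocabulary for a less-resourced language, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name `expand_vocab`, the toy vocabulary, and the mean-initialization heuristic for new embedding rows are assumptions chosen for clarity.

```python
import numpy as np

def expand_vocab(vocab, embeddings, new_tokens):
    """Append unseen tokens to the vocabulary and grow the embedding
    matrix to match. New rows are initialized to the mean of existing
    rows, a common heuristic when extending a pretrained model so the
    new tokens start near the center of the embedding distribution."""
    added = [t for t in new_tokens if t not in vocab]
    for t in added:
        vocab[t] = len(vocab)
    mean_vec = embeddings.mean(axis=0, keepdims=True)   # (1, dim)
    new_rows = np.repeat(mean_vec, len(added), axis=0)  # (n_added, dim)
    return vocab, np.vstack([embeddings, new_rows])

# Toy example: a 4-token base vocabulary with dim-8 embeddings,
# extended with two Korean tokens.
vocab = {"<s>": 0, "</s>": 1, "hello": 2, "world": 3}
emb = np.random.randn(4, 8)
vocab, emb = expand_vocab(vocab, emb, ["안녕", "세계"])
```

After expansion, `emb` has one row per token (here 6 x 8) and the new Korean tokens map to the appended indices; in practice the enlarged embedding (and output) layers would then be trained further during bilingual pretraining, as the abstract's second strategy describes.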