Parameter-Efficient Finetuning for Robust Continual Multilingual Learning

ACL (2022)

Abstract
We study the underexplored problem of Continual Multilingual Learning, where a multilingual model, already trained on task-specific data from all supported languages, is continually updated using batches of new multilingual training data for the same task. We show that naively updating the multilingual model can lead to losses in performance over a subset of languages, even though the aggregated performance metric shows an improvement. We establish this phenomenon over four tasks belonging to three task families (token-level, sentence-level, and seq2seq). We then build upon recent advances in parameter-efficient finetuning to develop novel finetuning strategies that allow us to jointly minimize language-specific forgetting while encouraging the positive cross-lingual transfer observed in this setup. Our proposed pipeline, LAFT-URIEL, improves the spread of gains over the supported languages while reducing the magnitude of language-specific losses incurred.
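The abstract does not spell out implementation details, so the following PyTorch snippet is only a minimal sketch of the general parameter-efficient continual-update idea it alludes to, not the LAFT-URIEL pipeline itself: the multilingual backbone stays frozen and only a small set of adapter (and head) parameters is trained on each incoming batch of new data, which limits how much earlier per-language behavior can be overwritten. All names here (BottleneckAdapter, AdaptedClassifier, continual_update) are hypothetical.

```python
# Illustrative sketch only: parameter-efficient continual updates with a frozen
# backbone and a small trainable bottleneck adapter (not the paper's method).
import torch
import torch.nn as nn


class BottleneckAdapter(nn.Module):
    """Small residual bottleneck applied to frozen backbone features."""

    def __init__(self, hidden_dim: int, bottleneck_dim: int = 32):
        super().__init__()
        self.down = nn.Linear(hidden_dim, bottleneck_dim)
        self.up = nn.Linear(bottleneck_dim, hidden_dim)

    def forward(self, hidden: torch.Tensor) -> torch.Tensor:
        # Residual connection keeps the frozen representation intact by default.
        return hidden + self.up(torch.relu(self.down(hidden)))


class AdaptedClassifier(nn.Module):
    """Frozen backbone + trainable adapter + trainable task head."""

    def __init__(self, backbone: nn.Module, hidden_dim: int, num_labels: int):
        super().__init__()
        self.backbone = backbone
        for p in self.backbone.parameters():  # freeze all backbone weights
            p.requires_grad = False
        self.adapter = BottleneckAdapter(hidden_dim)
        self.head = nn.Linear(hidden_dim, num_labels)

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        hidden = self.backbone(features)
        return self.head(self.adapter(hidden))


def continual_update(model: AdaptedClassifier, new_batches, lr: float = 1e-3):
    """One continual-learning round: finetune only the adapter/head on new data."""
    trainable = [p for p in model.parameters() if p.requires_grad]
    optimizer = torch.optim.AdamW(trainable, lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for features, labels in new_batches:
        optimizer.zero_grad()
        loss = loss_fn(model(features), labels)
        loss.backward()
        optimizer.step()


if __name__ == "__main__":
    # Toy stand-in for a multilingual encoder: a single frozen linear layer.
    backbone = nn.Linear(16, 16)
    model = AdaptedClassifier(backbone, hidden_dim=16, num_labels=3)
    batches = [(torch.randn(8, 16), torch.randint(0, 3, (8,))) for _ in range(4)]
    continual_update(model, batches)
    print("trainable params:",
          sum(p.numel() for p in model.parameters() if p.requires_grad))
```

In this sketch only the adapter and head receive gradient updates during each round of new multilingual data; the paper's actual contribution additionally shapes which parameters are updated per language (via URIEL-based language information), which is not reproduced here.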