Multitask Multilingual Model Adaptation with Featurized Low-Rank Mixtures
CoRR (2024)
Abstract
Adapting pretrained large language models (LLMs) to various downstream tasks
in tens or hundreds of human languages is computationally expensive.
Parameter-efficient fine-tuning (PEFT) significantly reduces the adaptation
cost by tuning only a small number of parameters. However, directly applying
PEFT methods such as LoRA (Hu et al., 2022) to diverse dataset mixtures can
lead to suboptimal performance due to limited parameter capacity and negative
interference among different datasets. In this work, we propose Featurized
Low-rank Mixtures (FLix), a novel PEFT method designed for effective multitask
multilingual tuning. FLix associates each unique dataset feature, such as the
dataset's language or task, with its own low-rank weight update parameters. By
composing feature-specific parameters for each dataset, FLix can accommodate
diverse dataset mixtures and generalize better to unseen datasets. Our
experiments show that FLix yields significant improvements across a variety of
tasks, in both supervised and zero-shot settings, with different training data
mixtures.
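
The composition described in the abstract can be made concrete with a short sketch. The following PyTorch code is an illustration under stated assumptions, not the paper's implementation: the `FLixLinear` name, the feature keys, and the rank are hypothetical, and the sketch simply sums one LoRA-style low-rank delta per active feature on top of a frozen base layer.

```python
import torch
import torch.nn as nn

class FLixLinear(nn.Module):
    """Illustrative linear layer with featurized low-rank mixtures.

    Each feature value (e.g. "lang_fr", "task_qa") owns its own
    LoRA-style low-rank update B @ A. For a given batch, the updates of
    all active features are summed and added to the frozen base output.
    """

    def __init__(self, base: nn.Linear, feature_values, rank: int = 8):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # only the low-rank deltas are trained
        d_out, d_in = base.weight.shape
        # One (A, B) pair per feature value. B starts at zero so the
        # layer initially reproduces the pretrained behavior exactly.
        self.A = nn.ParameterDict(
            {f: nn.Parameter(0.01 * torch.randn(rank, d_in)) for f in feature_values}
        )
        self.B = nn.ParameterDict(
            {f: nn.Parameter(torch.zeros(d_out, rank)) for f in feature_values}
        )

    def forward(self, x: torch.Tensor, active_features) -> torch.Tensor:
        y = self.base(x)
        # Compose feature-specific parameters: sum the low-rank deltas
        # of every feature that describes the current dataset.
        for f in active_features:
            y = y + x @ self.A[f].T @ self.B[f].T
        return y

# Example: one update per language plus one per task; a (hypothetical)
# French QA batch composes the "lang_fr" and "task_qa" deltas.
layer = FLixLinear(nn.Linear(512, 512), ["lang_fr", "lang_de", "task_qa"], rank=8)
out = layer(torch.randn(2, 512), active_features=["lang_fr", "task_qa"])
```

Under this reading, datasets from different language-task combinations share the frozen base weights but compose different low-rank updates, which is one way to realize the per-feature parameterization the abstract describes.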