VMT-Adapter: Parameter-Efficient Transfer Learning for Multi-Task Dense Scene Understanding
arXiv (2023)
Abstract
Large-scale pre-trained models have achieved remarkable success in various
computer vision tasks. A standard approach to leverage these models is to
fine-tune all model parameters for downstream tasks, which poses challenges in
terms of computational and storage costs. Recently, inspired by Natural
Language Processing (NLP), parameter-efficient transfer learning has been
successfully applied to vision tasks. However, most existing techniques
primarily focus on single-task adaptation, and despite limited research on
multi-task adaptation, these methods often exhibit suboptimal training and
inference efficiency. In this paper, we first propose a once-for-all Vision
Multi-Task Adapter (VMT-Adapter), which achieves approximately O(1) training and
inference efficiency w.r.t. the number of tasks. Concretely, VMT-Adapter shares
knowledge across multiple tasks to enhance cross-task interaction while preserving
task-specific knowledge via independent knowledge extraction modules. Notably,
since task-specific modules require few parameters, VMT-Adapter can handle an
arbitrary number of tasks with a negligible increase of trainable parameters.
We also propose VMT-Adapter-Lite, which further reduces the trainable
parameters by learning shared parameters between down- and up-projections.
Extensive experiments on four dense scene understanding tasks demonstrate the
superiority of VMT-Adapter(-Lite), achieving a 3.96%(1.34%) relative
improvement compared to single-task full fine-tuning, while utilizing merely
~1% (0.36%) trainable parameters of the pre-trained model.
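The once-for-all design described above can be sketched in code: a shared down-projection and up-projection are reused by every task, while each task owns only a tiny extraction module, so adding a task adds a negligible number of parameters. This is a minimal illustrative sketch in NumPy, not the paper's implementation; the scale-and-shift form of the task-specific module and all names (`VMTAdapterSketch`, `bottleneck`) are assumptions for illustration.

```python
import numpy as np

class VMTAdapterSketch:
    """Hypothetical sketch of a once-for-all multi-task adapter:
    shared down-/up-projections plus tiny per-task extraction modules."""

    def __init__(self, dim, bottleneck, num_tasks, seed=0):
        rng = np.random.default_rng(seed)
        # Shared across all tasks: one down-projection and one up-projection.
        self.W_down = rng.standard_normal((dim, bottleneck)) * 0.02
        self.W_up = rng.standard_normal((bottleneck, dim)) * 0.02
        # Task-specific knowledge extraction (assumed here to be a per-task
        # scale and shift in the bottleneck): only 2 * bottleneck parameters
        # per task, so the parameter count barely grows with the task number.
        self.scales = np.ones((num_tasks, bottleneck))
        self.shifts = np.zeros((num_tasks, bottleneck))

    def forward(self, x, task_id):
        h = np.maximum(x @ self.W_down, 0.0)                 # shared down-projection + ReLU
        h = h * self.scales[task_id] + self.shifts[task_id]  # task-specific modulation
        return x + h @ self.W_up                             # shared up-projection + residual

    def params_per_task(self):
        # Parameters added for each additional task.
        return self.scales[0].size + self.shifts[0].size
```

A Lite-style variant in this sketch would additionally tie `W_up = W_down.T`, mirroring the idea of learning shared parameters between the down- and up-projections.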