SCT: A Simple Baseline for Parameter-Efficient Fine-Tuning via Salient Channels
arXiv (2023)
Abstract
Pre-trained vision transformers provide strong representations for
various downstream tasks. Recently, many parameter-efficient fine-tuning (PEFT)
methods have been proposed, and their experiments demonstrate that tuning only
1% of extra parameters can surpass full fine-tuning in low-data resource
scenarios. However, these methods overlook task-specific information when
fine-tuning diverse downstream tasks. In this paper, we propose a simple yet
effective method called "Salient Channel Tuning" (SCT) that leverages
task-specific information by forwarding the model with the task images to
select partial channels in a feature map, which enables us to tune only 1/8
of the channels, leading to significantly lower parameter costs. Our method
outperforms full fine-tuning on 18 out of 19 tasks in the VTAB-1K benchmark
while adding only 0.11M parameters to ViT-B, 780x fewer than its full
fine-tuning counterpart. Furthermore, it surpasses other PEFT methods on
domain generalization and few-shot learning with lower parameter costs,
demonstrating our proposed tuning technique's strong capability and
effectiveness in the low-data regime.
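The channel-selection step described in the abstract can be sketched roughly as follows. This is a minimal illustration, not the paper's exact procedure: it assumes salience is scored by mean absolute activation per channel over features gathered from forwarded task images, and the function name and shapes are illustrative.

```python
import numpy as np

def select_salient_channels(feature_maps, ratio=1 / 8):
    """Pick the most salient channels of a feature map.

    feature_maps: (num_images, num_tokens, dim) activations collected by
    forwarding task images through the frozen backbone.
    Salience here is approximated by the mean absolute activation of each
    channel (an assumption for illustration; the paper defines its own
    salience criterion).
    """
    salience = np.abs(feature_maps).mean(axis=(0, 1))      # (dim,)
    k = max(1, int(round(ratio * feature_maps.shape[-1])))  # e.g. dim / 8
    return np.argsort(salience)[-k:]                        # top-k channel indices

# Toy example: 4 "images", 16 tokens, 64-dim features -> tune 8 channels.
feats = np.random.default_rng(0).normal(size=(4, 16, 64))
idx = select_salient_channels(feats)
print(len(idx))  # 8, i.e. 1/8 of the 64 channels
```

Only the selected channel indices would then be made trainable while the rest of the backbone stays frozen, which is where the low parameter cost comes from.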