Large Convolutional Model Tuning via Filter Subspace
arXiv (2024)
Abstract
Efficient fine-tuning methods are critical for addressing the high computational
and parameter complexity of adapting large pre-trained models to downstream
tasks. Our study is inspired by prior research that represents each convolution
filter as a linear combination of a small set of filter subspace elements,
referred to as filter atoms. In this paper, we propose to fine-tune pre-trained
models by adjusting only filter atoms, which are responsible for spatial-only
convolution, while preserving spatially-invariant channel combination knowledge
in atom coefficients. In this way, we bring a new filter subspace view for
model tuning. Furthermore, each filter atom can be recursively decomposed as a
combination of another set of atoms, which naturally expands the number of
tunable parameters in the filter subspace. By adapting only the filter atoms,
which comprise a small number of parameters, while keeping the remaining model
parameters fixed, the proposed approach is highly parameter-efficient.
It effectively preserves the capabilities of pre-trained models and prevents
overfitting to downstream tasks. Extensive experiments show that such a simple
scheme surpasses previous tuning baselines on both discriminative and generative
tasks.
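
To make the decomposition concrete, below is a minimal PyTorch sketch of a convolution layer factored into filter atoms and atom coefficients. The class name FilterSubspaceConv2d, the atom count num_atoms=6, and the random initialization are illustrative assumptions rather than the paper's implementation; the sketch only shows the structure the abstract describes, in which a few trainable atoms perform spatial-only convolution and frozen coefficients perform the spatially-invariant channel combination.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FilterSubspaceConv2d(nn.Module):
    """A conv layer factored into filter atoms and atom coefficients.

    Each k x k filter is a linear combination of `num_atoms` shared spatial
    atoms; the combination weights (atom coefficients) mix channels and stay
    frozen during fine-tuning, while the atoms remain trainable.
    """

    def __init__(self, in_ch, out_ch, kernel_size=3, num_atoms=6, padding=1):
        super().__init__()
        self.padding = padding
        # Filter atoms: m small spatial filters shared across all channels.
        # These are the only parameters updated during fine-tuning.
        self.atoms = nn.Parameter(
            torch.randn(num_atoms, 1, kernel_size, kernel_size)
        )
        # Atom coefficients: spatially-invariant channel combination, kept
        # frozen to preserve the pre-trained channel-mixing knowledge.
        self.coeffs = nn.Parameter(
            torch.randn(out_ch, in_ch * num_atoms, 1, 1), requires_grad=False
        )

    def forward(self, x):
        c = x.size(1)
        # Spatial-only convolution: apply every atom to every input channel
        # independently (depthwise, groups=c), giving c * m feature maps.
        atoms = self.atoms.repeat(c, 1, 1, 1)  # (c * m, 1, k, k)
        z = F.conv2d(x, atoms, padding=self.padding, groups=c)
        # 1x1 convolution with the frozen coefficients combines channels.
        return F.conv2d(z, self.coeffs)

# Only the atoms are trainable: m * k * k = 54 parameters here, versus
# out_ch * in_ch * k * k = 73,728 for the equivalent full filter bank.
layer = FilterSubspaceConv2d(in_ch=64, out_ch=128)
x = torch.randn(2, 64, 32, 32)
print(layer(x).shape)  # torch.Size([2, 128, 32, 32])
print(sum(p.numel() for p in layer.parameters() if p.requires_grad))  # 54
```

The recursive variant described in the abstract would additionally expand each atom over a second, smaller set of atoms to grow the tunable parameter budget; the sketch above omits that level.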