MULTIFLOW: Shifting Towards Task-Agnostic Vision-Language Pruning
CVPR 2024
Abstract
While excellent in transfer learning, Vision-Language models (VLMs) come with
high computational costs due to their large number of parameters. To address
this issue, removing parameters via model pruning is a viable solution.
However, existing techniques for VLMs are task-specific, and thus require
pruning the network from scratch for each new task of interest. In this work,
we explore a new direction: Task-Agnostic Vision-Language Pruning (TA-VLP).
Given a pretrained VLM, the goal is to find a unique pruned counterpart
transferable to multiple unknown downstream tasks. In this challenging setting,
the transferable representations already encoded in the pretrained model are a
key aspect to preserve. Thus, we propose Multimodal Flow Pruning (MULTIFLOW), the
first gradient-free pruning framework for TA-VLP, where: (i) the importance of
a parameter is expressed in terms of its magnitude and its information flow, by
incorporating the saliency of the neurons it connects; and (ii) pruning is
driven by the emergent (multimodal) distribution of the VLM parameters after
pretraining. We benchmark eight state-of-the-art pruning algorithms in the
context of TA-VLP, experimenting with two VLMs, three vision-language tasks,
and three pruning ratios. Our experimental results show that MULTIFLOW
outperforms recent sophisticated, combinatorial competitors in the vast
majority of the cases, paving the way towards addressing TA-VLP. The code is
publicly available at https://github.com/FarinaMatteo/multiflow.
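To make point (i) concrete, the sketch below illustrates one plausible reading of a magnitude-times-neuron-saliency score: each weight's importance is its magnitude scaled by the saliency of the two neurons it connects, with neuron saliency taken here as the mean absolute weight incident on that neuron. Both the saliency definition and the function names are illustrative assumptions, not the paper's exact formulation; see the official repository for the actual method.

```python
import numpy as np

def flow_scores(weight, eps=1e-8):
    """Hypothetical parameter-importance sketch: |w_ij| scaled by the
    saliency of the output neuron i and input neuron j it connects.
    Neuron saliency is approximated as mean absolute incident weight
    (an illustrative assumption, not MULTIFLOW's exact definition)."""
    mag = np.abs(weight)                            # shape (out, in)
    out_sal = mag.mean(axis=1, keepdims=True) + eps # per-output-neuron saliency
    in_sal = mag.mean(axis=0, keepdims=True) + eps  # per-input-neuron saliency
    return mag * out_sal * in_sal

def prune_mask(scores, ratio):
    """Boolean mask keeping the top (1 - ratio) fraction of parameters."""
    k = int(scores.size * ratio)  # number of parameters to remove
    if k == 0:
        return np.ones_like(scores, dtype=bool)
    thresh = np.partition(scores.ravel(), k - 1)[k - 1]
    return scores > thresh

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 16))
mask = prune_mask(flow_scores(W), ratio=0.75)
W_pruned = W * mask  # 75% of entries zeroed out
```

A global threshold like this prunes across the whole score matrix at once; the paper additionally drives pruning by the emergent multimodal distribution of parameters, which this toy example does not model.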