Continual Forgetting for Pre-trained Vision Models
CVPR 2024
Abstract
Owing to privacy and security concerns, the need to erase unwanted information
from pre-trained vision models is becoming evident. In real-world scenarios,
erasure requests may originate at any time from both users and model owners,
and these requests usually form a sequence. Under such a setting, selected
information is expected to be continuously removed from a pre-trained model
while preserving the rest. We define this problem as continual forgetting and
identify two key challenges. (i) For unwanted knowledge, efficient and
effective deletion is crucial. (ii) For remaining knowledge, the impact of the
forgetting procedure should be minimal. To address both, we propose Group
Sparse LoRA (GS-LoRA). Specifically, towards (i), we use LoRA modules to
fine-tune the FFN layers in Transformer blocks for each forgetting task
independently, and towards (ii), a simple group sparse regularization is
adopted, enabling automatic selection of specific LoRA groups and zeroing out
the others. GS-LoRA is effective, parameter-efficient, data-efficient, and
easy to implement. We conduct extensive experiments on face recognition,
object detection, and image classification, and demonstrate that GS-LoRA
manages to forget specific classes with minimal impact on other classes.
Code will be released on .