Corrupting Convolution-based Unlearnable Datasets with Pixel-based Image Transformations
CoRR (2023)
Abstract
Unlearnable datasets (UDs) cause a drastic drop in the generalization performance of models trained on them by introducing elaborate, imperceptible perturbations into clean training sets. Many existing defenses, e.g., JPEG compression and adversarial training, effectively counter UDs based on norm-constrained additive noise. However, a newly proposed type of convolution-based UD renders all existing defenses ineffective, presenting a greater challenge to defenders. To address this, we express a convolution-based unlearnable sample as the product of a matrix and a clean sample in a simplified scenario, and formalize the intra-class matrix inconsistency as $\Theta_{imi}$ and the inter-class matrix consistency as $\Theta_{imc}$ to investigate the working mechanism of convolution-based UDs. We conjecture that increasing both of these metrics will mitigate the unlearnability effect. After validation experiments that strongly support this hypothesis, we further design a random matrix that boosts both $\Theta_{imi}$ and $\Theta_{imc}$, achieving a notable degree of defense. Building upon and extending these findings, we are the first to propose a brand-new image COrruption that employs random multiplicative transformations via an INterpolation operation to successfully defend against convolution-based UDs. Our approach leverages global pixel random interpolation, effectively suppressing the impact of the multiplicative noise in convolution-based UDs. Additionally, we design two new forms of convolution-based UDs and find that our defense is the most effective against both.
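The core defense idea described above, replacing each pixel with a random convex interpolation of itself and a randomly chosen nearby pixel so that any fixed multiplicative (convolutional) structure in the perturbation is disrupted, can be sketched in NumPy. This is only an illustrative sketch under our own assumptions: the function name `random_pixel_interpolation` and the choice of a 3x3 neighborhood are ours, not the paper's.

```python
import numpy as np

def random_pixel_interpolation(img, alpha_range=(0.0, 1.0), seed=None):
    """Sketch of a global pixel random interpolation corruption.

    Each pixel is replaced by a random convex combination of itself
    and a randomly chosen pixel from its 3x3 neighborhood (hypothetical
    design choice), breaking spatially consistent multiplicative noise.
    """
    rng = np.random.default_rng(seed)
    h, w = img.shape[:2]
    # Random neighbor offsets in {-1, 0, 1} for every pixel.
    dy = rng.integers(-1, 2, size=(h, w))
    dx = rng.integers(-1, 2, size=(h, w))
    # Clip indices at the image border instead of wrapping around.
    ys = np.clip(np.arange(h)[:, None] + dy, 0, h - 1)
    xs = np.clip(np.arange(w)[None, :] + dx, 0, w - 1)
    neighbors = img[ys, xs]
    # Per-pixel interpolation weight, broadcast over channels if present.
    alpha = rng.uniform(*alpha_range, size=(h, w) + (1,) * (img.ndim - 2))
    return (1 - alpha) * img + alpha * neighbors

# Toy usage on a random "clean" image.
img = np.random.default_rng(0).random((32, 32, 3))
corrupted = random_pixel_interpolation(img, seed=1)
```

Because the output is a convex combination of existing pixel values, the corruption stays within the image's value range while randomizing the effective per-pixel transformation, which is the intuition behind suppressing multiplicative noise.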