Eliminating Catastrophic Overfitting Via Abnormal Adversarial Examples Regularization
arXiv (2024)
Abstract
Single-step adversarial training (SSAT) has demonstrated the potential to
achieve both efficiency and robustness. However, SSAT suffers from catastrophic
overfitting (CO), a phenomenon that leads to a severely distorted classifier,
making it vulnerable to multi-step adversarial attacks. In this work, we
observe that some adversarial examples generated on the SSAT-trained network
exhibit anomalous behaviour: although these training samples are produced by
the inner maximization process, their associated loss decreases instead. We
name such samples abnormal adversarial examples (AAEs). Upon further
analysis, we discover a close relationship between AAEs and classifier
distortion, as both the number and outputs of AAEs undergo a significant
variation with the onset of CO. Given this observation, we re-examine the SSAT
process and uncover that before the occurrence of CO, the classifier already
displays a slight distortion, indicated by the presence of a few AAEs.
Furthermore, directly optimizing on these AAEs accelerates the classifier's
distortion, and correspondingly, the variation of the AAEs sharply increases as
a result. In such a vicious circle, the classifier rapidly becomes highly
distorted and manifests as CO within a few iterations. These observations
motivate us to eliminate CO by hindering the generation of AAEs. Specifically,
we design a novel method, termed Abnormal Adversarial Examples Regularization
(AAER), which explicitly regularizes the variation of AAEs to hinder the
classifier from becoming distorted. Extensive experiments demonstrate that our
method can effectively eliminate CO and further boost adversarial robustness
with negligible additional computational overhead.
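
To make the notion of an AAE concrete, here is a minimal PyTorch sketch based only on the abstract's description (not the authors' code; `identify_aaes`, `epsilon`, and the FGSM inner step are illustrative assumptions): an adversarial example is flagged as abnormal when its loss falls below the clean loss, despite being produced by a loss-ascent step.

```python
import torch
import torch.nn.functional as F

def identify_aaes(model, x, y, epsilon=8 / 255):
    """Flag single-step adversarial examples whose loss *decreased*."""
    x = x.clone().detach().requires_grad_(True)
    clean_loss = F.cross_entropy(model(x), y, reduction="none")
    grad = torch.autograd.grad(clean_loss.sum(), x)[0]
    # Single-step (FGSM) inner maximization: a loss-ascent step.
    x_adv = (x + epsilon * grad.sign()).clamp(0, 1).detach()
    adv_loss = F.cross_entropy(model(x_adv), y, reduction="none")
    # Abnormal adversarial examples: the loss went *down* instead of up.
    aae_mask = adv_loss < clean_loss
    return x_adv, aae_mask
```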
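The abstract does not give the exact form of the AAER regularizer, only that it penalizes the variation of AAEs. One plausible reading, sketched below under that assumption, adds a hinge-style surrogate to the standard single-step adversarial objective: how far the loss on flagged AAEs drops below the clean loss (`aaer_training_step` and `lambda_reg` are hypothetical names, not the paper's formula). Because the term vanishes on normal examples, the extra cost over plain SSAT is small, consistent with the abstract's claim of negligible overhead.

```python
def aaer_training_step(model, x, y, optimizer, epsilon=8 / 255, lambda_reg=1.0):
    # Craft single-step adversarial examples and flag the abnormal ones.
    x_adv, aae_mask = identify_aaes(model, x, y, epsilon)
    clean_loss = F.cross_entropy(model(x), y, reduction="none")
    adv_loss = F.cross_entropy(model(x_adv), y, reduction="none")
    # Standard single-step adversarial training objective.
    loss = adv_loss.mean()
    if aae_mask.any():
        # Differentiable surrogate for the "variation" of AAEs: how far the
        # loss fell below the clean loss on the flagged samples. Driving this
        # toward zero discourages the distortion that precedes CO.
        reg = torch.relu(clean_loss - adv_loss)[aae_mask].mean()
        loss = loss + lambda_reg * reg
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```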