Boosting Adversarial Robustness Distillation Via Hybrid Decomposed Knowledge

Yulun Wu, Mingrui Lao, Yanming Guo, Dongmei Chen, Tianyuan Yua

ICASSP 2024 - 2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)

Abstract
Adversarial Robustness Distillation (ARD) has emerged as a potent defense mechanism that tailors adversarial robustness to small models. However, mainstream ARD methods typically exploit the teacher's responses as the transferred knowledge, while neglecting to analyze which target-related knowledge actually mitigates adversarial attacks. Furthermore, these methods focus primarily on logits-level distillation and overlook the feature-level knowledge in teacher models. In this paper, we introduce a novel Hybrid Decomposed Distillation (HDD) approach, which attempts to identify the knowledge vital against adversarial threats through dual-level distillation. Specifically, we first separate the predictions of the teacher model into target-related and target-unrelated knowledge for flexible yet efficient logits-level distillation. In addition, to further boost distillation efficacy, HDD leverages channel correlations to decompose intermediate features into highly and less relevant components. Extensive experiments on two benchmarks demonstrate that our HDD achieves superior performance in both clean accuracy and robustness compared with current state-of-the-art methods.
Keywords
Adversarial robustness distillation,knowledge decomposition,dual-level distillation
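The abstract describes two complementary decompositions: splitting the teacher's logits into target-related and target-unrelated knowledge, and splitting intermediate features by channel correlation into highly and less relevant components. Below is a minimal sketch of how such decompositions could look in PyTorch, written only from the abstract; the function names (`decomposed_logit_loss`, `split_channels_by_correlation`), the temperature `tau`, and the `top_ratio` split criterion are illustrative assumptions and not the paper's actual formulation.

```python
# Hypothetical sketch of the two decompositions hinted at in the abstract.
# All names, hyper-parameters, and loss forms below are assumptions.
import torch
import torch.nn.functional as F


def decomposed_logit_loss(student_logits, teacher_logits, targets, tau=4.0):
    """Distill target-related and target-unrelated knowledge separately
    (assumed form, similar in spirit to decoupled logit distillation)."""
    p_s = F.softmax(student_logits / tau, dim=1)
    p_t = F.softmax(teacher_logits / tau, dim=1)
    mask = F.one_hot(targets, num_classes=p_s.size(1)).bool()

    # Target-related knowledge: binary distribution (target class vs. rest).
    pt_s = torch.stack([p_s[mask], 1.0 - p_s[mask]], dim=1)
    pt_t = torch.stack([p_t[mask], 1.0 - p_t[mask]], dim=1)
    target_kl = F.kl_div(pt_s.log(), pt_t, reduction="batchmean")

    # Target-unrelated knowledge: distribution over non-target classes only.
    nt_s = F.softmax(student_logits.masked_fill(mask, -1e9) / tau, dim=1)
    nt_t = F.softmax(teacher_logits.masked_fill(mask, -1e9) / tau, dim=1)
    nontarget_kl = F.kl_div(nt_s.log(), nt_t, reduction="batchmean")
    return target_kl, nontarget_kl


def split_channels_by_correlation(feat, top_ratio=0.5):
    """Rank channels of a (B, C, H, W) feature map by their mean absolute
    correlation with the other channels and split them into highly and
    less relevant groups (assumed criterion)."""
    b, c, h, w = feat.shape
    flat = feat.view(b, c, -1)
    flat = flat - flat.mean(dim=2, keepdim=True)
    flat = F.normalize(flat, dim=2)
    # Average absolute channel-channel correlation across the batch.
    corr = torch.bmm(flat, flat.transpose(1, 2)).abs().mean(dim=(0, 2))
    k = max(1, int(top_ratio * c))
    high_idx = corr.topk(k).indices
    low_idx = torch.tensor([i for i in range(c) if i not in set(high_idx.tolist())])
    return feat[:, high_idx], feat[:, low_idx]
```

In an actual dual-level distillation pipeline, the two KL terms and the feature terms for the two channel groups would be weighted and summed into the training loss; those weights are not given in the abstract.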