FLOAT: Fast Learnable Once-for-All Adversarial Training for Tunable Trade-off between Accuracy and Robustness

WACV (2023)

Abstract
Existing models that achieve state-of-the-art (SOTA) performance on both clean and adversarially-perturbed images rely on convolution operations conditioned with feature-wise linear modulation (FiLM) layers. These layers require additional parameters and are hyperparameter sensitive. They significantly increase training time, memory cost, and potential latency, which can be costly for resource-limited or real-time applications. In this paper, we present a fast learnable once-for-all adversarial training (FLOAT) algorithm, which, instead of the existing FiLM-based conditioning, presents a unique weight-conditioned learning that requires no additional layers, thereby incurring no significant increase in parameter count, training time, or network latency compared to standard adversarial training. In particular, we add configurable scaled noise to the weight tensors, which enables a trade-off between clean and adversarial performance. Extensive experiments show that FLOAT can yield SOTA performance, improving both clean and perturbed image classification by up to ~6% and ~10%, respectively. Moreover, real hardware measurements show that FLOAT can reduce training time by up to 1.43× with up to 1.47× fewer model parameters in iso-hyperparameter settings compared to the FiLM-based alternatives. Additionally, to further improve memory efficiency we introduce FLOAT sparse (FLOATS), a form of non-iterative model pruning, and provide a detailed empirical analysis yielding a three-way accuracy-robustness-complexity trade-off for this new class of pruned, conditionally trained models.
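The core mechanism described above, conditioning the weight tensors themselves with configurable scaled noise rather than adding FiLM layers, can be illustrated with a minimal sketch. This is not the paper's implementation: the function name `float_conditioned_weights`, the Gaussian noise form, and the fixed `noise_scale` are assumptions for illustration (the paper presumably learns the scaling during training), and the control knob `lam` stands in for the user-tunable clean-vs-robust setting.

```python
import numpy as np

def float_conditioned_weights(w, noise_scale, lam, rng):
    """Hypothetical sketch of FLOAT-style weight conditioning.

    Adds scaled noise directly to the weight tensor, gated by a
    control value lam in [0, 1]: lam = 0 recovers the clean-mode
    weights, lam = 1 applies the full noise for robust inference,
    and intermediate values trade off between the two. No extra
    layer (and hence no extra per-layer parameters) is introduced.
    """
    noise = rng.standard_normal(w.shape)
    return w + lam * noise_scale * noise

# Toy weight tensor standing in for a convolution kernel.
w = np.random.default_rng(0).standard_normal((3, 3))

# Same noise draw, different gating values.
w_clean  = float_conditioned_weights(w, 0.1, lam=0.0,
                                     rng=np.random.default_rng(1))
w_robust = float_conditioned_weights(w, 0.1, lam=1.0,
                                     rng=np.random.default_rng(1))
```

With `lam = 0` the weights are bit-identical to the originals, so clean-image accuracy is untouched; turning `lam` up moves the same network toward its robust operating point without any architectural change.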
Keywords
Algorithms: Machine learning architectures, formulations, and algorithms (including transfer); Adversarial learning, adversarial attack and defense methods; Embedded sensing/real-time techniques