Architectural Adversarial Robustness: The Case for Deep Pursuit

2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2021)

Abstract
Despite their unmatched performance, deep neural networks remain susceptible to targeted attacks by nearly imperceptible levels of adversarial noise. While the underlying cause of this sensitivity is not well understood, theoretical analyses can be simplified by reframing each layer of a feed-forward network as an approximate solution to a sparse coding problem. Iterative solutions using basis pursuit are theoretically more stable and have improved adversarial robustness. However, cascading layer-wise pursuit implementations suffer from error accumulation in deeper networks. In contrast, our new method of deep pursuit approximates the activations of all layers as a single global optimization problem, allowing us to consider deeper real-world architectures with skip connections, such as residual networks. Experimentally, our approach demonstrates improved robustness to adversarial noise.
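The sparse coding view mentioned above can be made concrete with a small sketch. The code below (an illustration under assumptions, not the authors' implementation; the dictionary `D`, signal `x`, and penalty `lam` are hypothetical) solves the basis pursuit denoising problem z* = argmin_z ½‖x − Dz‖² + λ‖z‖₁ with iterative shrinkage-thresholding (ISTA). A single feed-forward layer with a ReLU-like soft threshold corresponds to just the first ISTA step from a zero initialization, which is the sense in which each layer "approximates" a sparse coding solution:

```python
import numpy as np

def ista(x, D, lam=0.1, n_iter=50):
    """Approximate z* = argmin_z 0.5*||x - D z||^2 + lam*||z||_1
    via iterative shrinkage-thresholding (a basis pursuit solver)."""
    L = np.linalg.norm(D, 2) ** 2            # Lipschitz constant of the smooth term
    z = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ z - x)             # gradient of the quadratic data term
        z = z - grad / L                     # gradient step
        z = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return z

rng = np.random.default_rng(0)
D = rng.standard_normal((32, 64)) / np.sqrt(32)  # hypothetical layer dictionary (weights)
x = rng.standard_normal(32)                      # hypothetical input signal
lam = 0.1
L = np.linalg.norm(D, 2) ** 2

# One "layer": a single thresholded correlation, i.e. the first ISTA step.
one_step = np.sign(D.T @ x / L) * np.maximum(np.abs(D.T @ x / L) - lam / L, 0.0)

# Iterative pursuit: many such steps, converging to the sparse code z*.
z_star = ista(x, D, lam=lam)

objective = lambda z: 0.5 * np.sum((x - D @ z) ** 2) + lam * np.sum(np.abs(z))
```

The iterative solution attains a strictly lower sparse coding objective than either the zero code or the single-step (one-layer) approximation, which is the stability property the abstract appeals to; deep pursuit extends this by coupling the codes of all layers into one joint objective rather than solving each layer's problem in cascade.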
Keywords
architectural adversarial robustness, deep pursuit, unmatched performance, deep neural networks, targeted attacks, imperceptible levels, adversarial noise, feed-forward network, approximate solution, sparse coding problem, iterative solutions, cascading layer-wise pursuit implementations, single global optimization problem, real-world architectures, residual networks