Comparing Complexities of Decision Boundaries for Robust Training: A Universal Approach

ACCV (6), 2022

Abstract
We investigate the geometric complexity of decision boundaries for robust training compared to standard training. By considering the local geometry of nearest neighbour sets, we study decision boundaries in a model-agnostic way and theoretically derive a lower bound $R^* \in \mathbb{R}$ on the perturbation magnitude $\delta \in \mathbb{R}$ for which robust training provably requires a geometrically more complex decision boundary than accurate training. We show that state-of-the-art robust models learn more complex decision boundaries than their non-robust counterparts, confirming previous hypotheses. We then compute $R^*$ for common image benchmarks and find that it also serves empirically as an upper bound beyond which label noise is introduced. For deep neural network classifiers, we demonstrate that perturbation magnitudes $\delta \ge R^*$ lead to reduced robustness and generalization performance. Therefore, $R^*$ bounds the maximum feasible perturbation magnitude for norm-bounded robust training and data augmentation. Finally, we show that $R^* < 0.5R$ for common benchmarks, where $R$ is a distribution's minimum nearest neighbour distance. We thereby improve on previous work on determining a distribution's maximum robust radius.
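For intuition about the quantity $R$, below is a minimal sketch of how one might estimate a distribution's minimum nearest-neighbour distance from a finite sample. This is not the paper's method for computing $R^*$ (which is derived theoretically from the local geometry of nearest neighbour sets); the function name, the Euclidean metric, and the toy data are assumptions for illustration only.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def min_nearest_neighbour_distance(X: np.ndarray) -> float:
    """Return R, the minimum nearest-neighbour distance over the sample X.

    Assumes the Euclidean (L2) metric; the paper's statements concern
    norm-bounded perturbations, so other norms could be substituted via
    the `metric` argument of NearestNeighbors.
    """
    # k=2 because each point's nearest neighbour at k=1 is itself.
    nn = NearestNeighbors(n_neighbors=2).fit(X)
    distances, _ = nn.kneighbors(X)
    # Column 0 is the zero distance to the point itself; column 1 is
    # the distance to the true nearest neighbour.
    return float(distances[:, 1].min())

# Toy example: 100 random points with CIFAR-10-like dimensionality.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 32 * 32 * 3))

R = min_nearest_neighbour_distance(X)
# Per the abstract, feasible perturbation magnitudes satisfy
# delta < R*, and for common benchmarks R* < 0.5 * R, so 0.5 * R
# gives a coarse upper cap on the perturbation budget.
print(f"R = {R:.4f}, coarse cap 0.5*R = {0.5 * R:.4f}")
```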
Keywords
robust training, decision boundaries, universal approach