ContRE: A Complementary Measure for Robustness Evaluation of Deep Networks via Contrastive Examples

23rd IEEE International Conference on Data Mining (ICDM 2023)

Abstract
Training images under data transformations, e.g., crops, shifts, rotations, and color distortions, have been suggested as contrastive examples for evaluating the robustness of deep neural networks against data noise [1]. In this work, we propose a practical framework, ContRE ("contre" means "against" in French), that uses Contrastive examples for DNN Robustness Estimation. Specifically, ContRE follows the assumption in [2], [3] that robust DNN models with good generalization performance are capable of extracting a consistent set of features and making consistent predictions for the same image under varying data transformations. Combining a set of randomized strategies for well-designed data transformations over the training set, ContRE adopts classification errors and Fisher ratios on the generated contrastive examples to assess and analyze the robustness of DNN models, which correlates with the models' generalization performance. To show the effectiveness and efficiency of ContRE, extensive experiments have been conducted with various DNN models, e.g., ResNet, VGGNet, DenseNet, and EfficientNet, on three open-source benchmark datasets, i.e., CIFAR-10, CIFAR-100, and ImageNet, with thorough ablation studies and applicability analyses. Our experimental results confirm that (1) the behaviors of deep models on contrastive examples are strongly correlated with their behaviors on the testing set, and (2) the robustness measured by ContRE is a robust estimate of generalization performance, complementary to the testing set, in various settings. Code will be made publicly available.
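The pipeline sketched in the abstract can be illustrated in a toy form: random data transformations generate contrastive examples from training images, and classification error plus Fisher ratio on those examples serve as robustness measures. The specific transforms (shift, brightness scaling), helper names, and the toy mean-intensity predictor below are illustrative assumptions for a minimal sketch, not the authors' actual implementation.

```python
import numpy as np

def random_transform(img, rng):
    """Generate one contrastive example: a random shift plus a color/brightness distortion.
    (Illustrative stand-ins for the paper's crops, shifts, rotations, and color distortions.)"""
    dy, dx = rng.integers(-2, 3, size=2)
    out = np.roll(img, (dy, dx), axis=(0, 1))          # spatial shift
    out = np.clip(out * rng.uniform(0.8, 1.2), 0.0, 1.0)  # brightness distortion
    return out

def contrastive_error(predict, images, labels, rng, n_views=4):
    """Classification error of `predict` over randomly transformed views of each image."""
    errors = []
    for img, y in zip(images, labels):
        for _ in range(n_views):
            errors.append(predict(random_transform(img, rng)) != y)
    return float(np.mean(errors))

def fisher_ratio(features, labels):
    """Between-class scatter divided by within-class scatter of feature vectors."""
    overall_mean = features.mean(axis=0)
    s_between = s_within = 0.0
    for c in np.unique(labels):
        fc = features[labels == c]
        s_between += len(fc) * np.sum((fc.mean(axis=0) - overall_mean) ** 2)
        s_within += np.sum((fc - fc.mean(axis=0)) ** 2)
    return s_between / s_within

# Toy usage: a brightness-threshold "model" that is invariant to the transforms above,
# so its contrastive error is zero; well-separated features yield a higher Fisher ratio.
rng = np.random.default_rng(0)
images = [np.full((4, 4), 0.2), np.full((4, 4), 0.8)]
labels = [0, 1]
predict = lambda img: int(img.mean() > 0.5)
err = contrastive_error(predict, images, labels, rng)
```

A higher Fisher ratio on the contrastive examples indicates that the model's features remain class-discriminative under the transformations, which is the consistency property the framework ties to generalization.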
Keywords
Contrastive Examples, Robustness, Generalization Performance