Robustness of classifier to adversarial examples under imbalanced data

2022 7th International Conference on Computer and Communication Systems (ICCCS) (2022)

Abstract
Adversarial examples (AE) have recently been used to fool classifiers, which poses great challenges for classifier design. It is therefore theoretically important to evaluate the robustness of a classifier to AE so that better classifiers can be designed. In this paper, we provide a theoretical framework for analyzing the robustness of a classifier to AE on imbalanced datasets from the perspective of AUC (Area Under the ROC Curve), and derive an interpretable upper bound. Specifically, we instantiate the obtained upper bound for a linear classifier, showing that the bound depends on the difficulty of the classification task and the risk of the classifier. Experimental results on the MNIST and CIFAR-10 datasets show that classifiers trained with pairwise surrogate losses of AUC are not robust to adversarial attack. A nonlinear classifier exhibits higher robustness to AE than a linear one, which indicates that more flexible classifiers can be used to improve adversarial robustness.
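For context on the pairwise surrogate losses of AUC mentioned above: AUC equals the fraction of (positive, negative) pairs that the classifier ranks correctly, and pairwise surrogates replace the non-differentiable 0/1 ranking indicator with a smooth or hinge penalty on each pair. The sketch below is a minimal illustration of this idea using a hinge surrogate; it is not the paper's implementation, and the function names and margin value are our own assumptions.

```python
import numpy as np

def empirical_auc(pos_scores, neg_scores):
    """Exact AUC: fraction of (positive, negative) pairs ranked
    correctly, with ties counted as 0.5."""
    diffs = pos_scores[:, None] - neg_scores[None, :]
    return float(((diffs > 0) + 0.5 * (diffs == 0)).mean())

def pairwise_auc_hinge_loss(pos_scores, neg_scores, margin=1.0):
    """Hinge surrogate for 1 - AUC, averaged over all pairs:
    a pair incurs loss when the positive's score does not exceed
    the negative's score by at least `margin`."""
    diffs = pos_scores[:, None] - neg_scores[None, :]
    return float(np.maximum(0.0, margin - diffs).mean())

# Toy scores: 3 positives, 2 negatives -> 6 pairs, 5 ranked correctly.
pos = np.array([2.0, 1.5, 0.5])
neg = np.array([0.0, 1.0])
print(empirical_auc(pos, neg))           # 5/6 ≈ 0.8333
print(pairwise_auc_hinge_loss(pos, neg)) # 2.5/6 ≈ 0.4167
```

Minimizing such a pairwise surrogate optimizes ranking quality directly, which is why it is a natural training objective under class imbalance, where accuracy-based losses are dominated by the majority class.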
Keywords
artificial intelligence,machine learning,adversarial examples,robustness,AUC