Certified Robustness to Label-Flipping Attacks via Randomized Smoothing
ICML (2020)

Abstract
Machine learning algorithms are known to be susceptible to data poisoning
attacks, where an adversary manipulates the training data to degrade
performance of the resulting classifier. In this work, we present a unifying
view of randomized smoothing over arbitrary functions, and we leverage this
novel characterization to propose a new strategy for building classifiers that
are pointwise-certifiably robust to general data poisoning attacks. As a
specific instantiation, we utilize our framework to build linear classifiers
that are robust to a strong variant of label flipping, where each test example
is targeted independently. In other words, for each test point, our classifier
includes a certification that its prediction would be the same had some number
of training labels been changed adversarially. Randomized smoothing has
previously been used to guarantee—with high probability—test-time
robustness to adversarial manipulation of the input to a classifier; we derive
a variant which provides a deterministic, analytical bound, sidestepping the
probabilistic certificates that traditionally result from the sampling
subprocedure. Further, we obtain these certified bounds with minimal additional
runtime complexity over standard classification and no assumptions on the train
or test distributions. We generalize our results to the multi-class case,
providing the first multi-class classification algorithm that is certifiably
robust to label-flipping attacks.
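The smoothing idea described above can be illustrated with a Monte Carlo sketch: train a linear classifier many times on copies of the training set whose labels are independently flipped, then majority-vote the predictions at a test point. Note this is only an illustrative approximation; the paper's contribution is a deterministic, analytical bound that avoids this sampling. The function name, the flip probability `q`, and the least-squares training choice are assumptions for the sketch, not details from the abstract.

```python
import numpy as np

def smoothed_predict(X_train, y_train, x_test, q=0.2, n_samples=200, seed=0):
    """Monte Carlo sketch of randomized smoothing over training labels.

    Trains a least-squares linear classifier on many noisy copies of the
    training set, where each binary label in {-1, +1} is independently
    flipped with probability q, and majority-votes the predictions at
    x_test. The vote margin is a proxy for the smoothed classifier's
    confidence; the paper instead derives a deterministic certificate.
    """
    rng = np.random.default_rng(seed)
    # Append a bias feature so the linear model has an intercept.
    Xb = np.hstack([X_train, np.ones((X_train.shape[0], 1))])
    xb = np.append(x_test, 1.0)
    votes = 0
    for _ in range(n_samples):
        flips = rng.random(y_train.shape[0]) < q
        y_noisy = np.where(flips, -y_train, y_train)
        # Least-squares fit on the label-flipped training set.
        w, *_ = np.linalg.lstsq(Xb, y_noisy, rcond=None)
        votes += 1 if xb @ w >= 0 else -1
    margin = abs(votes) / n_samples  # fraction of agreement in the vote
    pred = 1 if votes >= 0 else -1
    return pred, margin
```

A large vote margin suggests the prediction is stable under label noise, which is the intuition behind certifying robustness to adversarial label flips.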
Keywords

randomized smoothing, robustness, label-flipping