Adversarial examples in the physical world.

International Conference on Learning Representations (2017)

Abstract
Most existing machine learning classifiers are highly vulnerable to adversarial examples. An adversarial example is a sample of input data which has been modified very slightly in a way that is intended to cause a machine learning classifier to misclassify it. In many cases, these modifications can be so subtle that a human observer does not even notice the modification at all, yet the classifier still makes a mistake. Adversarial examples pose security concerns because they could be used to perform an attack on machine learning systems, even if the adversary has no access to the underlying model. Up to now, all previous work has assumed a threat model in which the adversary can feed data directly into the machine learning classifier. This is not always the case for systems operating in the physical world, for example those which are using signals from cameras and other sensors as input. This paper shows that even in such physical-world scenarios, machine learning systems are vulnerable to adversarial examples. We demonstrate this by feeding adversarial images obtained from a cell-phone camera to an ImageNet Inception classifier and measuring the classification accuracy of the system. We find that a large fraction of adversarial examples are classified incorrectly even when perceived through the camera.
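
The small perturbations the abstract refers to can be illustrated with the fast gradient sign method (FGSM), one of the attacks evaluated in the paper. The sketch below is a minimal, assumption-laden illustration using a generic pretrained torchvision Inception v3 model and a placeholder image in [0, 1] (ImageNet normalization is omitted for brevity); it is not the authors' implementation.

```python
import torch
import torch.nn.functional as F
import torchvision.models as models

def fgsm_perturb(model, image, label, epsilon=8 / 255):
    """Return an adversarially perturbed copy of `image` (fast gradient sign method).

    `image` is a (1, 3, H, W) tensor with pixel values in [0, 1];
    `label` is the true class index as a (1,)-shaped tensor.
    """
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step in the direction that increases the loss, then clip back to valid pixel range.
    adv = image + epsilon * image.grad.sign()
    return adv.clamp(0.0, 1.0).detach()

# Hypothetical usage: an ImageNet classifier and a single placeholder input.
model = models.inception_v3(weights="IMAGENET1K_V1").eval()
x = torch.rand(1, 3, 299, 299)   # placeholder image; a real pipeline would load and normalize a photo
y = torch.tensor([282])          # placeholder label index
x_adv = fgsm_perturb(model, x, y)
print((x_adv - x).abs().max())   # perturbation magnitude is bounded by epsilon
```

In the paper's physical-world experiment, adversarial images of this kind are printed on paper, photographed with a cell-phone camera, and fed back to the classifier, which still misclassifies a large fraction of them.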