Defending Deep Neural Networks against Structural Perturbations

user-5d54d98b530c705f51c2fe5a (2019)

Abstract
Deep learning has had a tremendous impact on the field of computer vision. However, the deployment of such algorithms in real-world environments relies heavily on their robustness to noise. Much work has been put forward in recent years to analyse and defend such models against attacks that slightly perturb an image and change the output of the network. This work focuses on testing the robustness of a model against naturally occurring structural perturbations, and we propose a systematic way to defend against such attacks. This is in contrast to other works where complicated optimisation methods are required to generate an adversarial example. We also analyse the effect of adversarial training on the decision boundary of the model. Our work strives to ensure the safety of deep learning in multiple domains such as facial recognition, automated driving and object detection. This paper primarily focuses on image classification, and we believe that our algorithm works independently of the model architecture and dataset.
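The abstract does not spell out the defence, but a minimal sketch of the general idea is shown below: evaluate a classifier under naturally occurring structural perturbations (here, random rotations and translations) and harden it by training on perturbed copies of the data. The `TinyCNN` model, the `structurally_perturb` helper, the perturbation ranges, and the random stand-in data are all illustrative assumptions, not the paper's actual architecture, dataset, or algorithm.

```python
# Hypothetical sketch, not the paper's method: test robustness to structural
# perturbations (rotation/translation) and train on perturbed data to defend.
import torch
import torch.nn as nn
import torchvision.transforms.functional as TF


class TinyCNN(nn.Module):
    """Small placeholder classifier standing in for any image model."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),
        )
        self.classifier = nn.Linear(16 * 4 * 4, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))


def structurally_perturb(images, max_rotation=15.0, max_shift=3):
    """Apply a random rotation and translation to a batch of images."""
    angle = float(torch.empty(1).uniform_(-max_rotation, max_rotation))
    dx = int(torch.randint(-max_shift, max_shift + 1, (1,)))
    dy = int(torch.randint(-max_shift, max_shift + 1, (1,)))
    return TF.affine(images, angle=angle, translate=[dx, dy],
                     scale=1.0, shear=[0.0])


def accuracy(model, images, labels):
    with torch.no_grad():
        return (model(images).argmax(1) == labels).float().mean().item()


# Dummy tensors standing in for a real dataset such as MNIST.
images = torch.rand(64, 1, 28, 28)
labels = torch.randint(0, 10, (64,))

model = TinyCNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Adversarial-style training against structural perturbations: mix clean and
# perturbed batches so the learned decision boundary covers both.
for step in range(20):
    batch = torch.cat([images, structurally_perturb(images)])
    target = torch.cat([labels, labels])
    opt.zero_grad()
    loss_fn(model(batch), target).backward()
    opt.step()

print("clean accuracy:    ", accuracy(model, images, labels))
print("perturbed accuracy:", accuracy(model, structurally_perturb(images), labels))
```

Comparing clean and perturbed accuracy before and after such training is one simple way to measure how much the defence moves the decision boundary, in the spirit of the analysis the abstract describes.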