Structural Robustness for Deep Learning Architectures

2019 IEEE Data Science Workshop (DSW)

Abstract
Deep networks have been shown to provide state-of-the-art performance in many machine learning challenges. Unfortunately, they are susceptible to various types of noise, including adversarial attacks and corrupted inputs. In this work, we introduce a formal definition of robustness that can be viewed as a localized Lipschitz constant of the network function, quantified in the domain of the data to be classified. We compare this notion of robustness to existing ones and study its connections with methods in the literature. We evaluate this metric by performing experiments on standard competitive vision datasets.
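To make the central notion concrete: a localized Lipschitz constant bounds how much the network output can change relative to an input perturbation, measured near a specific data point rather than over the whole input space. The sketch below is not the authors' metric; it is a minimal Monte-Carlo lower-bound estimate for a toy ReLU network, with placeholder weights and a sampling radius chosen purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-layer ReLU network f: R^8 -> R^3 (weights are illustrative placeholders).
W1 = rng.standard_normal((16, 8)) / np.sqrt(8)
W2 = rng.standard_normal((3, 16)) / np.sqrt(16)

def f(x):
    return W2 @ np.maximum(W1 @ x, 0.0)

def local_lipschitz(x, radius=0.1, n_samples=500):
    """Empirical lower bound on the Lipschitz constant of f on a ball around x.

    Samples random perturbations d with ||d|| <= radius and returns the
    largest observed ratio ||f(x + d) - f(x)|| / ||d||.
    """
    fx = f(x)
    best = 0.0
    for _ in range(n_samples):
        d = rng.standard_normal(x.shape)
        d *= radius * rng.uniform(1e-3, 1.0) / np.linalg.norm(d)
        best = max(best, np.linalg.norm(f(x + d) - fx) / np.linalg.norm(d))
    return best

x = rng.standard_normal(8)
L_local = local_lipschitz(x)
# Crude global upper bound: product of layer spectral norms (ReLU is 1-Lipschitz).
L_global = np.linalg.norm(W2, 2) * np.linalg.norm(W1, 2)
```

The gap between `L_local` and `L_global` illustrates why a localized quantity, evaluated where the data actually lies, can be a far tighter robustness measure than a network-wide Lipschitz bound.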
Keywords
network function, localized Lipschitz constant, adversarial attacks, machine learning challenges, deep networks, deep learning architectures, structural robustness