Deep Learning Robustness Verification for Few-Pixel Attacks

Proceedings of the ACM on Programming Languages (PACMPL), 2023

Abstract
While successful, neural networks have been shown to be vulnerable to adversarial example attacks. In L0 adversarial attacks, also known as few-pixel attacks, the attacker picks t pixels from the image and arbitrarily perturbs them. To understand the robustness level of a network to these attacks, it is required to check the robustness of the network to perturbations of every set of t pixels. Since the number of sets is exponentially large, existing robustness verifiers, which can reason about a single set of pixels at a time, are impractical for L0 robustness verification. We introduce Calzone, an L0 robustness verifier for neural networks. To the best of our knowledge, Calzone is the first to provide a sound and complete analysis for L0 adversarial attacks. Calzone builds on the following observation: if a classifier is robust to any perturbation of a set of k pixels, for k > t, then it is robust to any perturbation of its subsets of size t. Thus, to reduce the verification time, Calzone predicts the largest k that can be proven robust, via dynamic programming and sampling. It then relies on covering designs to compute a covering of the image with sets of size k. For each set in the covering, Calzone submits its corresponding box neighborhood to an existing L∞ robustness verifier. If a set's neighborhood is not robust, Calzone repeats this process and covers this set with sets of size k' < k. We evaluate Calzone on several datasets and networks, for t ≤ 5. Typically, Calzone verifies L0 robustness within a few minutes. On our most challenging instances (e.g., t = 5), Calzone completes within a few hours. We compare to a MILP baseline and show that it does not scale already for t = 3.
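To make the covering-and-refinement loop concrete, here is a minimal Python sketch of the idea described in the abstract. It is not the authors' implementation: `greedy_covering` is a simple greedy stand-in for the precomputed covering designs Calzone draws on, `is_box_robust` is a hypothetical callback standing in for the off-the-shelf L∞ verifier, and the refinement step simply decrements to k - 1, whereas Calzone picks k' < k via its prediction.

```python
from itertools import combinations

def greedy_covering(pixels, k, t):
    """Greedily build a covering of all t-subsets of `pixels` by subsets of
    size at most k: every t-subset is contained in some returned block.
    (Illustrative stand-in for Calzone's precomputed covering designs.)"""
    pool = sorted(pixels)
    uncovered = set(combinations(pool, t))
    blocks = []
    while uncovered:
        seed = next(iter(uncovered))                  # an uncovered t-subset
        extras = [p for p in pool if p not in seed][: k - t]
        block = tuple(sorted(set(seed) | set(extras)))
        blocks.append(block)
        uncovered -= set(combinations(block, t))      # all t-subsets the block covers
    return blocks

def verify_l0(image, pixels, t, k, is_box_robust):
    """Try to prove robustness of `image` to arbitrary perturbations of any
    t pixels from `pixels`, by covering with size-k sets and refining on
    failure. `is_box_robust(image, block)` is a hypothetical callback to an
    L-infinity verifier run on the box neighborhood where the pixels in
    `block` range freely. Returns True iff every t-subset is proven robust."""
    for block in greedy_covering(pixels, k, t):
        if is_box_robust(image, block):
            continue      # robust for k pixels => robust for all t-subsets inside
        if k == t:
            return False  # a box over exactly t pixels failed: not provably robust
        # Refine the failed block: re-cover it with smaller sets (here k - 1).
        if not verify_l0(image, block, t, k - 1, is_box_robust):
            return False
    return True

# Hypothetical usage on a 28x28 grayscale image flattened to 784 pixels:
#   robust = verify_l0(img, range(784), t=3, k=6, is_box_robust=run_linf_verifier)
```

The monotonicity observation does the heavy lifting here: one box query over a set of k pixels subsumes all C(k, t) of its t-subsets, so the number of verifier calls is governed by the size of the covering rather than by the exponential number of t-subsets.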
Keywords
Neural network verification, L0 adversarial example attacks