Output Range Analysis for Feed-Forward Deep Neural Networks via Linear Programming

IEEE Transactions on Reliability (2022)

Abstract
The success of deep neural networks (DNNs) and their potential use in many safety-critical applications have motivated research on their formal verification. A fundamental primitive enabling the formal analysis of neural networks is output range analysis. Existing approaches to output range analysis either focus on simple activation functions, such as ReLU, or compute a relaxed result for other activation functions, such as the exponential linear unit (ELU). In this article, we propose an approach to compute the output range of feed-forward deep neural networks via linear programming. The key idea is to encode activation functions, such as ELU and sigmoid, as linear constraints in terms of the chord between the left and right endpoints of the input range and the tangent lines at some special points within that range. We also present a strategy to partition the network to obtain a tighter range. The experimental results show that our approach obtains tighter results than RobustVerifier on ELU and sigmoid networks. Moreover, our approach performs better than (the linear encodings implemented in) CROWN on ELU networks with alpha = 0.5 and 1.0 and on sigmoid networks, and better than CNN-Cert and DeepCert on ELU networks with alpha = 0.5 or 1.0. For ELU networks with alpha = 2.0, our approach achieves results close to those of CROWN, CNN-Cert, and DeepCert. Finally, we found that the network partition helps achieve tighter results and also improves efficiency for ELU networks.
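To make the chord-and-tangent encoding concrete, below is a minimal Python sketch of the idea for the sigmoid case: the activation is sandwiched on an input range [l, u] between the chord through the two endpoints and a tangent line. The function names, the midpoint tangent choice, and the constant-bound fallback in the mixed-curvature case are illustrative assumptions, not the paper's exact choice of "special points" or its full LP encoding.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_linear_bounds(l, u):
    """Return ((k_lo, b_lo), (k_up, b_up)) such that
    k_lo*x + b_lo <= sigmoid(x) <= k_up*x + b_up for all x in [l, u].
    Illustrative sketch only; the paper's construction may differ."""
    sl, su = sigmoid(l), sigmoid(u)
    # Chord (secant line) through the left and right endpoints.
    k_chord = (su - sl) / (u - l)
    b_chord = sl - k_chord * l
    # Tangent line at the midpoint; sigmoid'(x) = s * (1 - s).
    m = 0.5 * (l + u)
    s = sigmoid(m)
    k_tan = s * (1.0 - s)
    b_tan = s - k_tan * m

    if u <= 0.0:
        # Sigmoid is convex on [l, u]: chord above, tangent below.
        return (k_tan, b_tan), (k_chord, b_chord)
    if l >= 0.0:
        # Sigmoid is concave on [l, u]: tangent above, chord below.
        return (k_chord, b_chord), (k_tan, b_tan)
    # Mixed-curvature case (l < 0 < u): fall back to sound constant
    # bounds, since sigmoid is monotone. The paper's tangent-based
    # constraints are tighter here.
    return (0.0, sl), (0.0, su)

if __name__ == "__main__":
    # Check soundness of the bounds on a convex segment.
    (k_lo, b_lo), (k_up, b_up) = sigmoid_linear_bounds(-3.0, -0.5)
    xs = np.linspace(-3.0, -0.5, 101)
    assert np.all(k_lo * xs + b_lo <= sigmoid(xs) + 1e-12)
    assert np.all(sigmoid(xs) <= k_up * xs + b_up + 1e-12)
    print("linear bounds verified on [-3.0, -0.5]")
```

Once every neuron carries such linear lower and upper constraints, the output range of the network can be bounded by maximizing and minimizing each output variable subject to those constraints with an off-the-shelf LP solver.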
Keywords
Neurons, Neural networks, Encoding, Linear programming, Deep learning, Upper bound, Taylor series, Deep neural networks (DNNs), ELU, linear programming (LP), output range analysis, sigmoid