Tight Neural Network Verification via Semidefinite Relaxations and Linear Reformulations
AAAI Conference on Artificial Intelligence (2022)
Abstract
We present a novel semidefinite programming (SDP) relaxation that enables tight and efficient verification of neural networks. The tightness is achieved by combining SDP relaxations with valid linear cuts, constructed using the reformulation-linearisation technique (RLT). The computational efficiency results from a layerwise SDP formulation and an iterative algorithm for incrementally adding RLT-generated linear cuts to the verification formulation. The layer RLT-SDP relaxation presented here is shown to produce the tightest SDP relaxation for ReLU neural networks available in the literature. We report experimental results based on MNIST neural networks showing that the method outperforms state-of-the-art methods while maintaining acceptable computational overheads. For networks of approximately 10k nodes (1k, respectively), the proposed method improved the ratio of certified robustness cases from 0% to 82% (from 35% to 70%, respectively).
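To make the general construction concrete, below is a minimal sketch of the standard SDP relaxation for a single ReLU layer y = max(0, Wx) over a box input domain, tightened with RLT (McCormick-style) cuts linearised through the lifted moment matrix. This is an illustration of the technique the abstract names, not the paper's exact layerwise formulation or its iterative cut-selection algorithm; the use of cvxpy with the SCS solver, and all variable names and dimensions, are assumptions for the example.

```python
# Minimal sketch (assumed tooling: cvxpy + SCS), NOT the paper's exact method:
# SDP relaxation of one ReLU layer y = max(0, W x) with RLT/McCormick cuts.
import numpy as np
import cvxpy as cp

np.random.seed(0)
n_in, n_out = 2, 2
W = np.random.randn(n_out, n_in)
l, u = -np.ones(n_in), np.ones(n_in)   # box bounds on the input x
c = np.random.randn(n_out)             # objective direction to bound: c^T y

d = n_in + n_out
# Lifted matrix P = [[1, z^T], [z, Z]] with z = (x, y); Z relaxes z z^T.
P = cp.Variable((d + 1, d + 1), symmetric=True)
x, y = P[0, 1:1 + n_in], P[0, 1 + n_in:]
Z = P[1:, 1:]
Zxx, Zxy, Zyy = Z[:n_in, :n_in], Z[:n_in, n_in:], Z[n_in:, n_in:]

cons = [P >> 0, P[0, 0] == 1]
# ReLU as linear constraints plus lifted complementarity y (y - Wx) = 0,
# i.e. diag(Zyy) == diag(W Zxy) in the relaxed variables.
cons += [y >= 0, y >= W @ x, cp.diag(Zyy) == cp.diag(W @ Zxy)]
# Input box.
cons += [x >= l, x <= u]
# RLT cuts: pairwise products of the bound constraints, e.g.
# (x_i - l_i)(x_j - l_j) >= 0, linearised with Zxx standing in for x x^T.
for i in range(n_in):
    for j in range(n_in):
        cons += [
            Zxx[i, j] - l[j] * x[i] - l[i] * x[j] + l[i] * l[j] >= 0,
            Zxx[i, j] - u[j] * x[i] - u[i] * x[j] + u[i] * u[j] >= 0,
            Zxx[i, j] - u[j] * x[i] - l[i] * x[j] + l[i] * u[j] <= 0,
        ]

prob = cp.Problem(cp.Maximize(c @ y), cons)
prob.solve(solver=cp.SCS)
print(f"upper bound on c^T y over the relaxation: {prob.value:.4f}")
```

If the maximised objective stays below the margin needed to change the network's prediction, robustness is certified; the RLT cuts can only tighten the bound relative to the plain SDP relaxation, which is the mechanism the abstract attributes its gains to.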
Keywords
Machine Learning (ML)