Chordal sparsity for SDP-based neural network verification

arXiv (2024)

Abstract
Neural networks are central to many emerging technologies, but verifying their correctness remains a major challenge. It is known that network outputs can be sensitive and fragile to even small input perturbations, increasing the risk of unpredictable and undesirable behavior. Fast and accurate verification of neural networks is therefore critical to their widespread adoption, and in recent years a variety of methods have been developed in response. In this paper, we focus on improving semidefinite programming (SDP) based techniques for neural network verification. Such techniques offer the power of expressing complex geometric constraints while retaining a convex problem formulation, but scalability remains a major issue in practice. Our starting point is the DeepSDP framework proposed by Fazlyab et al., which uses quadratic constraints to abstract the verification problem into a large-scale SDP. However, solving this SDP quickly becomes intractable as the network grows. Our key observation is that by leveraging chordal sparsity, we can decompose the primary computational bottleneck of DeepSDP, a large linear matrix inequality (LMI), into an equivalent collection of smaller LMIs. We call our chordally sparse optimization program Chordal-DeepSDP and prove that it is exactly as expressive as DeepSDP. Moreover, we show that further analysis of Chordal-DeepSDP allows us to rewrite its collection of LMIs in a second level of decomposition, which we call Chordal-DeepSDP-2 and which yields another significant computational gain. Finally, we provide numerical experiments on real networks trained on cart-pole dynamics, showcasing the computational advantage of Chordal-DeepSDP and Chordal-DeepSDP-2 over DeepSDP.
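To make the decomposition idea concrete, here is a minimal, self-contained sketch in Python with CVXPY. This is our illustration, not the authors' code: the block-tridiagonal sparsity pattern, the block sizes, the random data, and the toy largest-eigenvalue objective are all assumptions chosen for demonstration (the actual DeepSDP LMI has a more structured pattern derived from the network layers). The snippet enforces one large LMI both directly and, equivalently, as a sum of small clique-supported LMIs via Agler's theorem, which is the mechanism that chordal-sparsity approaches like Chordal-DeepSDP exploit.

```python
# Illustrative sketch (assumed setup, not the paper's code): replace one big
# LMI M <= 0, whose sparsity pattern is block-tridiagonal and hence chordal,
# by small LMIs on the cliques of consecutive block pairs (Agler's theorem).
import numpy as np
import cvxpy as cp

np.random.seed(0)
n_blocks, b = 4, 3            # four 3x3 diagonal blocks -> a 12x12 LMI
n = n_blocks * b

# Symmetric data matrix with a block-tridiagonal (chordal) pattern.
A = np.zeros((n, n))
for i in range(n_blocks):
    D = np.random.randn(b, b)
    A[i*b:(i+1)*b, i*b:(i+1)*b] = D + D.T
for i in range(n_blocks - 1):
    C = np.random.randn(b, b)
    A[i*b:(i+1)*b, (i+1)*b:(i+2)*b] = C
    A[(i+1)*b:(i+2)*b, i*b:(i+1)*b] = C.T

# Toy verification-style question: the smallest t with A - t*I <= 0,
# i.e. the largest eigenvalue of A.
t = cp.Variable()
M = A - t * np.eye(n)

# Dense formulation: one n x n LMI.
cp.Problem(cp.Minimize(t), [M << 0]).solve(solver=cp.SCS)
t_dense = t.value

# Chordal formulation: M <= 0 iff M = sum_k E_k^T Z_k E_k with each
# Z_k <= 0 supported on clique k. For a block-tridiagonal pattern the
# maximal cliques are the pairs of consecutive blocks.
cliques = [list(range(i*b, (i+2)*b)) for i in range(n_blocks - 1)]
Zs = [cp.Variable((2*b, 2*b), symmetric=True) for _ in cliques]
expr = 0
for Z, idx in zip(Zs, cliques):
    E = np.zeros((2*b, n))
    E[np.arange(2*b), idx] = 1.0          # selector matrix E_k
    expr = expr + E.T @ Z @ E
cons = [Z << 0 for Z in Zs] + [expr == M]
cp.Problem(cp.Minimize(t), cons).solve(solver=cp.SCS)

print(f"lambda_max via one big LMI: {t_dense:.4f}")
print(f"lambda_max via clique LMIs: {t.value:.4f}")   # should match
```

Both solves should return the same optimal t, reflecting the equivalence of the two formulations; the practical payoff of the decomposition is that many small PSD cones scale far better with interior-point and first-order solvers than one large cone as the network, and hence n, grows.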
Keywords
Neural networks, Convex optimization, Safety verification