Verification-friendly Networks: the Case for Parametric ReLUs.

IJCNN (2023)

Abstract
It has increasingly been recognised that verification can contribute to the validation and debugging of neural networks before deployment, particularly in safety-critical areas. While progress has been made in the verification of neural networks, present techniques still do not scale to the large ReLU-based neural networks used in many applications. In this paper we show that considerable progress can be made by employing Parametric ReLU activation functions in lieu of plain ReLU functions. We give training procedures that produce networks achieving an order-of-magnitude reduction in verification overheads and 30-100% fewer timeouts with VeriNet, a state-of-the-art Symbolic Interval Propagation-based verification toolkit, without compromising accuracy. Furthermore, we show that adversarial training combined with our approach improves certified robustness by up to 36% compared to adversarial training performed on baseline ReLU networks.
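The abstract's core substitution, Parametric ReLU in place of plain ReLU, can be illustrated with a minimal sketch. The paper's actual training procedures and VeriNet integration are not detailed here; the function below only shows the activation itself, with the slope parameter `a` standing in for the learnable per-channel coefficient:

```python
import numpy as np

def prelu(x, a=0.25):
    """Parametric ReLU: identity for non-negative inputs,
    slope `a` (learned during training) for negative inputs.
    With a = 0 this reduces to the plain ReLU."""
    return np.where(x >= 0, x, a * x)

def relu(x):
    """Plain ReLU for comparison."""
    return np.maximum(x, 0.0)
```

The negative-side slope is what changes the verification problem: a PReLU is piecewise linear with two non-zero slopes, which affects how symbolic interval propagation bounds each neuron.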
Keywords
adversarial training,baseline ReLU networks,debugging,Parametric ReLU activation functions,plain ReLU functions,ReLU-based neural networks,safety-critical areas,SoA symbolic interval propagation-based verification toolkit,verification overheads,verification-friendly networks