New threat on formal verification for neural networks: example and fault tolerance

IFAC-PapersOnLine (2022)

Abstract
This article details a threat to NN formal verification that is well known in the formal verification of classical systems: errors in the learned model of a NN could cause the NN to pass formal verification of a property while violating that same property in real life. The standard solution to this threat for classical systems, expert review, is inadequate for NNs due to their lack of explainability. Here, we propose a detection and recovery mechanism to tolerate this threat. The mechanism is based on a mathematical diversification of the system's model and the online verification of the formal safety properties. It was successfully implemented and validated on an application example which, to our knowledge, is one of the most concrete NN formal verifications in the literature: the Adaptive Cruise Control function of an autonomous car.
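The abstract's detection-and-recovery idea can be illustrated with a minimal sketch: an online monitor re-checks a formal safety property (here, a safe following distance) against an independently derived, diversified model before trusting the NN controller's command, and falls back to a recovery action when the check fails. All function names, thresholds, and the simple distance model below are illustrative assumptions, not details taken from the paper.

```python
# Hypothetical sketch of online verification with a diversified model.
# The safe-distance formula, thresholds, and recovery action are
# illustrative assumptions, not the paper's actual mechanism.

def safe_distance(ego_speed: float, headway: float = 1.8, margin: float = 5.0) -> float:
    """Diversified analytical model: minimum safe gap (m) for a given speed (m/s)."""
    return headway * ego_speed + margin

def monitor(nn_accel: float, ego_speed: float, gap: float, dt: float = 0.1):
    """Online check: would applying the NN's acceleration keep the gap safe?
    Returns (command, recovered); falls back to braking if the property fails."""
    next_speed = ego_speed + nn_accel * dt
    next_gap = gap - next_speed * dt      # worst case: lead vehicle stationary
    if next_gap >= safe_distance(next_speed):
        return nn_accel, False            # NN command passes the safety check
    return -3.0, True                     # recovery: fixed braking command

# A NN command that would close the gap too quickly is overridden:
cmd, recovered = monitor(nn_accel=2.0, ego_speed=20.0, gap=30.0)
```

The key design point mirrored from the abstract is diversification: the monitor's model is derived independently of the NN's learned model, so an error in the learned model does not silently propagate into the safety check.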
Keywords
Dependability, Neural approximations for optimal control, Estimation, Applications and fault tolerant control, Fault tolerant control, Reconfigurable control