Verifying Controllers With Vision-Based Perception Using Safe Approximate Abstractions

IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems (2022)

Abstract
Fully formal verification of perception models is likely to remain challenging for the foreseeable future, and yet these models are being integrated into safety-critical control systems. We present a practical method for reasoning about the safety of such systems. Our method is based on systematically constructing approximations of perception models from system-level safety requirements, data, and program analysis of the modules downstream from perception. These approximations have desirable properties: they are low-dimensional, intelligible, and tractable. The closed-loop system, with the approximation substituted for the actual perception model, is verified to be safe. Establishing a formal relationship between the actual and the approximate perception models remains well beyond available verification techniques. However, we do provide a useful empirical measure of their closeness, called precision. Overall, our method can trade off the size of the approximation against its precision. We apply the method to two significant case studies: 1) a vision-based lane tracking controller for an autonomous vehicle and 2) a controller for an agricultural robot. We show how the generated approximations for each system can be composed with the downstream modules and verified using program analysis tools like CBMC. Detailed evaluations of the impact of size and environmental parameters (e.g., lighting, road surface, and plant type) on the precision of the generated approximations suggest that the approach can be useful for realistic control systems.
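The abstract describes precision as an empirical measure of closeness between the actual perception model and its approximation. A minimal sketch of one plausible reading, assuming precision is the fraction of sampled environment states on which the abstraction's predicted output set contains the actual perception output (the function names, the interval-valued abstraction, and the toy lane-offset example below are illustrative assumptions, not the paper's exact definitions):

```python
import random

def empirical_precision(perception, abstraction, sample_states, contains):
    """Fraction of sampled states on which the abstraction's predicted
    output set contains the actual perception model's output."""
    hits = sum(
        1 for s in sample_states
        if contains(abstraction(s), perception(s))
    )
    return hits / len(sample_states)

# Toy example (hypothetical): perception estimates a lane offset with
# bounded noise; the abstraction predicts an interval around the true offset.
random.seed(0)
perception = lambda s: s + random.uniform(-0.05, 0.05)   # noisy estimate
abstraction = lambda s: (s - 0.1, s + 0.1)               # interval abstraction
contains = lambda interval, y: interval[0] <= y <= interval[1]

states = [i / 100 for i in range(100)]
p = empirical_precision(perception, abstraction, states, contains)
```

In this toy setup the noise never exceeds the interval half-width, so the estimated precision is 1.0; shrinking the interval (a smaller abstraction) would lower precision, illustrating the size-versus-precision trade-off mentioned above.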
Keywords
Abstraction, autonomous systems, formal verification, vision-based control