Verifying Collision Risk Estimation using Autonomous Driving Scenarios Derived from a Formal Model

J. Intell. Robotic Syst. (2023)

Abstract
Autonomous driving technology is safety-critical and thus requires thorough validation. In particular, the probabilistic algorithms employed in the perception systems of autonomous vehicles (AVs) are notoriously hard to validate due to the wide range of possible critical scenarios. Such critical scenarios cannot be easily addressed with current manual validation methods, so there is a need for an automatic and formal validation technique. To this end, we propose a new approach for perception component verification that, given a high-level and human-interpretable description of a critical situation, generates relevant AV scenarios and uses them for automatic verification. To achieve this goal, we integrate two recently proposed methods, one for scenario generation and one for verification, both based on formal verification tools. First, we use formal conformance test generation tools to derive, from a verified formal model, sets of scenarios to be run in a simulator. Second, we model check the traces of the simulation runs to validate the probabilistic estimation of collision risks. Using formal methods brings the combined advantages of increased confidence in the correct representation of the chosen configuration (temporal logic verification), a guarantee of the coverage and relevance of automatically generated scenarios (conformance testing), and an automatic quantitative analysis of the test executions (verification and statistical analysis on traces).
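The trace-analysis step can be illustrated with a small sketch. The snippet below is not the authors' toolchain (which relies on CADP, CARLA, and CMCDOT); it is a minimal, hypothetical Python example showing how a bounded temporal property might be evaluated over recorded simulation traces, assuming each trace is a list of per-step records containing the perception component's estimated collision probability and the ground-truth distance to the nearest obstacle. The property, the thresholds, and the trace format are placeholders chosen only for illustration.

```python
from typing import Dict, List

# Hypothetical trace format: one record per simulation step, e.g.
# {"risk": 0.82, "distance_m": 1.4}, where "risk" is the estimated
# collision probability and "distance_m" the ground-truth distance
# to the nearest obstacle observed in the simulator.

def risk_estimate_is_sound(trace: List[Dict[str, float]],
                           risk_threshold: float = 0.8,
                           danger_distance_m: float = 2.0,
                           horizon_steps: int = 10) -> bool:
    """Check a bounded 'response' property over one trace: whenever the
    estimated risk exceeds risk_threshold, the obstacle must actually come
    within danger_distance_m during the next horizon_steps steps
    (i.e. a high risk estimate is not a false alarm)."""
    for i, step in enumerate(trace):
        if step["risk"] >= risk_threshold:
            window = trace[i:i + horizon_steps + 1]
            if not any(s["distance_m"] <= danger_distance_m for s in window):
                return False  # high risk reported but no near-collision followed
    return True

def satisfaction_rate(traces: List[List[Dict[str, float]]]) -> float:
    """Fraction of simulated scenario runs satisfying the property;
    a simple stand-in for the quantitative analysis over test executions."""
    if not traces:
        return 0.0
    return sum(risk_estimate_is_sound(t) for t in traces) / len(traces)
```

In the paper's setting, the traces come from CARLA runs of the automatically generated scenarios and the properties are checked with model-checking tools rather than ad-hoc scripts; the sketch only conveys the shape of the analysis.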
Keywords
Behavior tree,CADP,CARLA,CMCDOT,Model-based testing,Model checking,Quantitative analysis