Evidential deep learning-based multi-modal environment perception for intelligent vehicles

IV (2023)

Abstract
Intelligent vehicles (IVs) are pursued in both research laboratories and industry to revolutionize transportation systems. Since driving surroundings can be cluttered and weather conditions may vary, environment perception in IVs is a challenging task, and multi-modal sensors are therefore employed. Deep learning algorithms achieve outstanding perception performance, but they typically express prediction uncertainty through probabilities, whereas evidence theory offers a better-suited formalism for handling it. In this work, evidence theory is therefore combined with a camera-lidar deep learning fusion architecture. The coupling generates basic belief functions from distances to class prototypes and uses a distance-based decision rule. Because IVs have constrained computational power, a reduced deep-learning architecture is used in this formulation. On the task of road detection, the evidential approach outperforms the probabilistic one. Moreover, ambiguous features can be prudently assigned to ignorance rather than forced into a possibly wrong probabilistic decision. The coupling is also extended to semantic segmentation, showing that the evidential formulation adapts easily to the multi-class case. The evidential formulation is thus generic and yields more accurate and versatile predictions while maintaining the trade-off between performance and computational cost in IVs. Experiments use the KITTI dataset.
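The abstract does not include code, but the described coupling can be illustrated with a minimal sketch: basic belief functions are derived from distances to labelled class prototypes (in the style of Denoeux's prototype-based evidential classifiers), combined with Dempster's rule, and a decision is taken that may fall back to ignorance. All names, parameters (alpha, gamma) and the ignorance threshold below are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def mass_from_prototype(x, prototype, class_idx, n_classes, alpha=0.95, gamma=1.0):
    """Basic belief assignment induced by one labelled prototype (assumed form)."""
    d2 = np.sum((x - prototype) ** 2)
    support = alpha * np.exp(-gamma * d2)   # evidence for the prototype's class
    m = np.zeros(n_classes + 1)             # last slot = ignorance (whole frame)
    m[class_idx] = support
    m[-1] = 1.0 - support
    return m

def dempster_combine(m1, m2):
    """Dempster's rule for simple BBAs of the form {one singleton, frame}."""
    n = len(m1) - 1
    combined = np.zeros_like(m1)
    conflict = 0.0
    for i in range(n):
        for j in range(n):
            prod = m1[i] * m2[j]
            if i == j:
                combined[i] += prod       # same singleton: evidence agrees
            else:
                conflict += prod          # disjoint singletons: conflicting mass
    for i in range(n):
        combined[i] += m1[i] * m2[-1] + m1[-1] * m2[i]   # singleton vs. frame
    combined[-1] = m1[-1] * m2[-1]                        # frame vs. frame
    return combined / (1.0 - conflict)                    # normalize out conflict

def decide(mass, ignorance_threshold=0.5):
    """Decision rule sketch: abstain (ignorance) when the frame mass dominates."""
    if mass[-1] >= ignorance_threshold:
        return "ignorance"
    return int(np.argmax(mass[:-1]))

# Toy usage: a 2-class problem (e.g. road / not-road) with one feature vector.
prototypes = [np.array([0.0, 0.0]), np.array([3.0, 3.0])]
labels = [0, 1]
x = np.array([0.4, 0.2])

masses = [mass_from_prototype(x, p, c, n_classes=2) for p, c in zip(prototypes, labels)]
combined = masses[0]
for m in masses[1:]:
    combined = dempster_combine(combined, m)
print(combined, decide(combined))
```

In this sketch, a feature close to no prototype keeps most of its mass on the whole frame, so the decision rule can return "ignorance" instead of committing to a possibly wrong class, which mirrors the behavior the abstract attributes to the evidential formulation.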
Keywords
intelligent vehicles, environment perception, evidence theory, deep learning