Multi-Sensor Fusion Based Off-Road Drivable Region Detection and Its ROS Implementation

2023 International Conference on Wireless Communications Signal Processing and Networking (WiSPNET), 2023

Abstract
There is a growing demand for multi-sensor fusion based off-road drivable region detection in the field of autonomous vehicles and robotics. By combining data from multiple sensors, this technology improves navigation and localization in off-road environments such as rough terrain, leading to more accurate and reliable detection of drivable regions, which is crucial for the safe operation of autonomous vehicles off-road. In this work, a deep learning architecture is employed to identify drivable and obstacle regions in images, learning to classify and cluster the regions simultaneously via semantic segmentation. Further, a LiDAR-based ground segmentation method is introduced to classify drivable regions more effectively: it splits the regions into small bins and applies a ground-fitting technique with adaptive likelihood estimation. Finally, a late fusion method is proposed to combine the two results for better classification of the drivable region. The entire fusion architecture was implemented in ROS. On the RELLIS-3D dataset, the semantic segmentation achieves a mean accuracy of 84.3%. Furthermore, certain regions misclassified by the semantic segmentation are corrected by the LiDAR-based ground segmentation, and the fusion provides a better representation of the drivable region.
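
As a rough illustration of how such a pipeline might be wired together, the sketch below shows a minimal ROS (Python/rospy) late-fusion node that merges a camera-based drivable mask with LiDAR points already labelled as ground. This is not the paper's implementation: the topic names, message encodings, and camera projection matrix are assumptions chosen only for illustration.

```python
#!/usr/bin/env python
# Hypothetical late-fusion ROS node (sketch). Topic names, frames, and the
# camera projection matrix below are assumptions, not taken from the paper.
import numpy as np
import rospy
import message_filters
from sensor_msgs.msg import Image, PointCloud2
from sensor_msgs import point_cloud2
from cv_bridge import CvBridge

# Assumed 3x4 projection matrix mapping LiDAR-frame points to image pixels.
P = np.array([[600.0, 0.0, 320.0, 0.0],
              [0.0, 600.0, 240.0, 0.0],
              [0.0, 0.0, 1.0, 0.0]])

bridge = CvBridge()
pub = None

def fuse(seg_msg, ground_msg):
    """Combine the image drivable mask with projected LiDAR ground points."""
    # Binary drivable mask from the segmentation node (mono8: 255 = drivable).
    mask = bridge.imgmsg_to_cv2(seg_msg, desired_encoding="mono8")
    h, w = mask.shape[:2]
    fused = mask.copy()

    # Project each LiDAR point labelled as ground into the image plane and
    # mark the corresponding pixel as drivable (simple OR-style late fusion).
    for x, y, z in point_cloud2.read_points(
            ground_msg, field_names=("x", "y", "z"), skip_nans=True):
        uvw = P @ np.array([x, y, z, 1.0])
        if uvw[2] <= 0.0:  # point is behind the camera
            continue
        u, v = int(uvw[0] / uvw[2]), int(uvw[1] / uvw[2])
        if 0 <= u < w and 0 <= v < h:
            fused[v, u] = 255

    pub.publish(bridge.cv2_to_imgmsg(fused, encoding="mono8"))

if __name__ == "__main__":
    rospy.init_node("drivable_region_fusion")
    pub = rospy.Publisher("/fusion/drivable_mask", Image, queue_size=1)
    seg_sub = message_filters.Subscriber("/camera/drivable_mask", Image)
    gnd_sub = message_filters.Subscriber("/lidar/ground_points", PointCloud2)
    sync = message_filters.ApproximateTimeSynchronizer(
        [seg_sub, gnd_sub], queue_size=5, slop=0.1)
    sync.registerCallback(fuse)
    rospy.spin()
```

A real system would replace the pixel-wise OR with a learned or confidence-weighted fusion rule, but the structure (time-synchronized camera and LiDAR results combined in a single ROS node) mirrors the late-fusion idea described in the abstract.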
Keywords
off-road driving, image, LiDAR, multi-sensor fusion, ROS