Probabilistic 3D Point Cloud Fusion on Graphics Processors for Automotive (Poster)

FUSION 2019

Abstract
Nowadays, Advanced Driver Assistance Systems (ADAS) support the driver in common driving situations by alerting the driver visually or audibly, or to some extent by intervening in the driving task. These systems fuse object lists and occupancy grids that are generated within each separate sensor system. However, as several recent accidents have shown, safe autonomous driving in adverse weather conditions or under (partial) sensor defects requires the fusion of sensor information at a lower abstraction level. Multimodal, probabilistic sensor fusion can be achieved, e.g., by fusing the confidence of each sensor detection in a 3D occupancy grid. The free space between the sensor and the detection is modelled using ray casting and inverse sensor models. The resulting robust 3D representation of the vehicle environment can be used by subsequent algorithms for driving situation analysis and maneuver planning. Fusing the sensor data at a lower abstraction level, e.g. at the 3D point level, exposes new challenges regarding data transfer and computation due to the massively increased amount of data. As graphics processors exhibit a high degree of parallelism and are optimized for 3D graphics, this contribution presents a GPGPU implementation of the previously described 3D sensor fusion at the point level. Three different data structures (a dense 3D voxel grid and two octree structures) are implemented and evaluated regarding memory consumption and runtime for the sensor fusion algorithm. The Nvidia Jetson Xavier and the Nvidia Tesla V100 are chosen as evaluation platforms for energy efficiency and highest performance, respectively. Using the optimizations for parallel execution, the algorithm runs in real time, in less than 100 ms, on the high-performance GPU.
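When occupancy is stored as log-odds l = log(p / (1 - p)), the per-voxel Bayes fusion named in the abstract reduces to an addition, and the probability can be recovered as p = 1 / (1 + e^(-l)). The following CUDA sketch illustrates this for the dense-voxel-grid variant (one of the three data structures the poster evaluates): one thread per detection marches a ray from the sensor origin to the point, decrementing traversed voxels and incrementing the hit voxel. The grid dimensions, cell size, fixed-step ray casting, and the constants LOG_ODDS_FREE and LOG_ODDS_OCC are illustrative assumptions, not values from the poster.

// Hypothetical sketch of point-level Bayes fusion into a dense 3D
// log-odds voxel grid. All sizes and inverse-sensor-model constants
// below are assumed for illustration, not taken from the poster.
#include <cstdio>
#include <cmath>
#include <cuda_runtime.h>

#define NX 256
#define NY 256
#define NZ 64
#define CELL 0.2f              // voxel edge length in metres (assumed)
#define LOG_ODDS_FREE -0.4f    // assumed free-space decrement
#define LOG_ODDS_OCC   0.85f   // assumed occupied increment at the detection

struct Point { float x, y, z; };

// Flat index of the voxel containing (x, y, z), or -1 if outside the grid.
__host__ __device__ int voxelIndex(float x, float y, float z) {
    int ix = (int)floorf(x / CELL);
    int iy = (int)floorf(y / CELL);
    int iz = (int)floorf(z / CELL);
    if (ix < 0 || ix >= NX || iy < 0 || iy >= NY || iz < 0 || iz >= NZ)
        return -1;
    return (iz * NY + iy) * NX + ix;
}

// One thread per detection: fixed-step ray casting from the sensor origin
// to the point marks traversed voxels as free; the hit voxel is fused as
// occupied. atomicAdd keeps concurrent log-odds updates from many rays safe.
__global__ void fusePoints(const Point* pts, int n, Point origin, float* grid) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    float dx = pts[i].x - origin.x;
    float dy = pts[i].y - origin.y;
    float dz = pts[i].z - origin.z;
    float len = sqrtf(dx * dx + dy * dy + dz * dz);
    int hit = voxelIndex(pts[i].x, pts[i].y, pts[i].z);
    int steps = (int)(len / (0.5f * CELL));   // sample at half-cell spacing
    int last = -1;
    for (int s = 0; s < steps; ++s) {
        float t = (float)s / (float)steps;
        int v = voxelIndex(origin.x + t * dx, origin.y + t * dy, origin.z + t * dz);
        if (v >= 0 && v != last && v != hit) {
            atomicAdd(&grid[v], LOG_ODDS_FREE);  // free-space update
            last = v;                            // skip repeated samples in a cell
        }
    }
    if (hit >= 0) atomicAdd(&grid[hit], LOG_ODDS_OCC);  // occupied update
}

int main() {
    Point pt = {10.f, 10.f, 2.f}, origin = {0.f, 0.f, 0.f};
    Point* d_pts; float* d_grid;
    size_t bytes = (size_t)NX * NY * NZ * sizeof(float);
    cudaMalloc(&d_pts, sizeof(Point));
    cudaMalloc(&d_grid, bytes);
    cudaMemset(d_grid, 0, bytes);   // log-odds 0 corresponds to the 0.5 prior
    cudaMemcpy(d_pts, &pt, sizeof(Point), cudaMemcpyHostToDevice);
    fusePoints<<<1, 32>>>(d_pts, 1, origin, d_grid);
    float l;
    cudaMemcpy(&l, d_grid + voxelIndex(pt.x, pt.y, pt.z), sizeof(float),
               cudaMemcpyDeviceToHost);
    printf("log-odds at hit voxel: %.2f (p = %.2f)\n", l, 1.f / (1.f + expf(-l)));
    cudaFree(d_pts); cudaFree(d_grid);
    return 0;
}

In the octree variants evaluated in the poster, the flat voxelIndex lookup would be replaced by a tree traversal that trades the dense grid's memory footprint for extra indirection; the additive log-odds update itself stays the same.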
Keywords
sensor fusion,graphics processor,Bayes fusion,ray casting,inverse sensor model