VoxelScape: Large Scale Simulated 3D Point Cloud Dataset of Urban Traffic Environments

IEEE Transactions on Intelligent Transportation Systems (2023)

Abstract
Having a profound understanding of the surrounding environment is considered one of the crucial tasks for the reliable operation of future self-driving cars. The Light Detection and Ranging (LiDAR) sensor plays a critical role in achieving such understanding due to its capability to perceive the world in 3D. As with 2D perception tasks, current state-of-the-art methods for 3D perception tasks rely on deep neural networks (DNNs). However, the performance of 3D perception tasks, especially point-wise semantic segmentation, is not on par with their 2D counterparts. One of the main reasons is the lack of publicly available labelled 3D point cloud datasets (PCDs) from 3D LiDAR sensors. In this work, we introduce the VoxelScape dataset, a large-scale simulated 3D PCD with 100K annotated point cloud scans. The annotations in the VoxelScape dataset include both point-wise semantic labels and 3D bounding box labels. Additionally, we used a number of baseline approaches to validate the transferability of VoxelScape to real 3D PCDs for two challenging 3D perception tasks. The promising results show that training DNNs on VoxelScape boosts the performance of the 3D perception tasks on the real PCDs. Furthermore, we also release the proposed data generation pipeline, enabling the research community to simulate realistic 3D LiDAR point cloud data for scenarios beyond those covered in our VoxelScape dataset. The VoxelScape dataset and the corresponding LiDAR simulation codes are publicly available at https://voxel-scape.github.io/dataset
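The abstract describes scans annotated with both point-wise semantic labels and 3D bounding boxes. As a minimal sketch of consuming such data, the snippet below assumes a SemanticKITTI-style binary layout (each scan an N×4 float32 buffer of x, y, z, intensity, paired with N uint32 per-point labels); the exact VoxelScape file format is not specified in the abstract, so the format, the `parse_scan` helper, and the class count are illustrative assumptions.

```python
import numpy as np

def parse_scan(points_bytes: bytes, labels_bytes: bytes):
    """Decode one LiDAR scan (x, y, z, intensity) with per-point labels.

    Assumes a SemanticKITTI-style layout: N x 4 float32 points and
    N uint32 semantic labels. VoxelScape's actual format may differ.
    """
    points = np.frombuffer(points_bytes, dtype=np.float32).reshape(-1, 4)
    labels = np.frombuffer(labels_bytes, dtype=np.uint32)
    assert points.shape[0] == labels.shape[0], "expected one label per point"
    return points, labels

# Synthetic stand-in for one scan file pair (no real VoxelScape data used).
rng = np.random.default_rng(0)
pts = rng.standard_normal((1000, 4)).astype(np.float32)
lbl = rng.integers(0, 20, size=1000, dtype=np.uint32)  # 20 hypothetical classes

points, labels = parse_scan(pts.tobytes(), lbl.tobytes())
print(points.shape, labels.shape)
```

Keeping points and labels as parallel arrays of equal length makes it straightforward to feed point-wise segmentation networks, which expect one semantic target per input point.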
Keywords
Three-dimensional displays, Laser radar, Annotations, Sensors, Task analysis, Semantics, Point cloud compression, Light Detection and Ranging (LiDAR), simulation, 3D object detection, 3D semantic segmentation, annotations