WOMD-LiDAR: Raw Sensor Dataset Benchmark for Motion Forecasting
CoRR (2023)
Abstract
Widely adopted motion forecasting datasets substitute the observed sensory
inputs with higher-level abstractions such as 3D boxes and polylines. These
sparse shapes are inferred through annotating the original scenes with
perception systems' predictions. Such intermediate representations tie the
quality of the motion forecasting models to the performance of computer vision
models. Moreover, the human-designed explicit interfaces between perception and
motion forecasting typically pass only a subset of the semantic information
present in the original sensory input. To study the effect of these modular
approaches, design new paradigms that mitigate these limitations, and
accelerate the development of end-to-end motion forecasting models, we augment
the Waymo Open Motion Dataset (WOMD) with large-scale, high-quality, diverse
LiDAR data for the motion forecasting task.
The new augmented dataset, WOMD-LiDAR, consists of over 100,000 scenes, each
spanning 20 seconds and containing well-synchronized, calibrated, high-quality
LiDAR point clouds captured across a range of urban and suburban
geographies (https://waymo.com/open/data/motion/). Compared to the Waymo Open
Dataset (WOD), the WOMD-LiDAR dataset contains 100x more scenes. Furthermore, we
integrate the LiDAR data into the motion forecasting model training and provide
a strong baseline. Experiments show that the LiDAR data brings improvement in
the motion forecasting task. We hope that WOMD-LiDAR will provide new
opportunities for boosting end-to-end motion forecasting models.
Keywords
raw sensor dataset benchmark, forecasting, motion, WOMD-LiDAR