DriveWorld: 4D Pre-trained Scene Understanding via World Models for Autonomous Driving
arXiv (2024)
Abstract
Vision-centric autonomous driving has recently attracted widespread attention due to
its lower cost. Pre-training is essential for extracting a universal
representation. However, current vision-centric pre-training typically relies
on either 2D or 3D pretext tasks, overlooking the temporal characteristics of
autonomous driving as a 4D scene understanding task. In this paper, we address
this challenge by introducing a world model-based autonomous driving 4D
representation learning framework, dubbed DriveWorld, which is capable
of pre-training from multi-camera driving videos in a spatio-temporal fashion.
Specifically, we propose a Memory State-Space Model for spatio-temporal
modelling, which consists of a Dynamic Memory Bank module for learning
temporal-aware latent dynamics to predict future changes and a Static Scene
Propagation module for learning spatial-aware latent statics to offer
comprehensive scene contexts. We additionally introduce a Task Prompt to
decouple task-aware features for various downstream tasks. The experiments
demonstrate that DriveWorld delivers promising results on various autonomous
driving tasks. When pre-trained with the OpenScene dataset, DriveWorld achieves
a 7.5% increase in mAP for 3D object detection, a 3.0% increase in IoU for
online mapping, a 5.0% increase in AMOTA for multi-object tracking, a 0.1m
decrease in minADE for motion forecasting, a 3.0% increase in IoU for occupancy
prediction, and a 0.34m reduction in average L2 error for planning.
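The abstract's central idea — a Memory State-Space Model that splits the latent scene into a temporal branch (Dynamic Memory Bank) and a spatial branch (Static Scene Propagation), then conditions downstream features on a Task Prompt — can be illustrated with a toy sketch. Everything below (class names, the uniform memory attention, the identity transition matrix, the multiplicative prompt) is a simplified assumption for illustration, not the paper's actual implementation:

```python
import numpy as np

class DynamicMemoryBank:
    """Toy stand-in for the temporal branch: keeps a rolling bank of past
    latent states and predicts the next latent from them."""
    def __init__(self, dim, capacity=4):
        self.capacity = capacity
        self.bank = []           # most recent temporal latents
        self.W = np.eye(dim)     # placeholder for a learned transition model

    def update(self, z_t):
        self.bank.append(z_t)
        self.bank = self.bank[-self.capacity:]  # keep only the newest entries

    def predict_next(self):
        # attend uniformly over the memory bank, then apply the transition
        context = np.mean(self.bank, axis=0)
        return self.W @ context


def fuse_scene(dynamic_latent, static_latent, task_prompt):
    """Concatenate the temporal-aware and spatial-aware latents, then gate
    them with a task-prompt vector to decouple task-specific features."""
    return np.concatenate([dynamic_latent, static_latent]) * task_prompt
```

In this sketch the static latent is simply carried alongside the dynamic one; in the paper, Static Scene Propagation supplies shared scene context while the memory bank models change over time, and the Task Prompt selects which aspects of the fused representation a given downstream head should emphasize.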