PVTransformer: Point-to-Voxel Transformer for Scalable 3D Object Detection
arXiv (2024)
Abstract
3D object detectors for point clouds often rely on a pooling-based PointNet
to encode sparse points into grid-like voxels or pillars. In this paper, we
identify that the common PointNet design introduces an information bottleneck
that limits 3D object detection accuracy and scalability. To address this
limitation, we propose PVTransformer: a transformer-based point-to-voxel
architecture for 3D detection. Our key idea is to replace the PointNet pooling
operation with an attention module, leading to a better point-to-voxel
aggregation function. Our design respects the permutation invariance of sparse
3D points while being more expressive than the pooling-based PointNet.
Experimental results show our PVTransformer achieves much better performance
compared to the latest 3D object detectors. On the widely used Waymo Open
Dataset, our PVTransformer achieves state-of-the-art 76.5 mAPH L2,
outperforming the prior art of SWFormer by +1.7 mAPH L2.
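The core idea, replacing PointNet's pooling with an attention-based point-to-voxel aggregator, can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the single learned query vector and feature dimensions are hypothetical, and a real model would use multi-head attention with learned projections. The sketch only demonstrates why both aggregators are permutation invariant while attention retains information from every point instead of keeping only per-channel maxima.

```python
import numpy as np

def pointnet_pool(points):
    # PointNet-style aggregation: per-point features are max-pooled
    # into one voxel feature, keeping only the per-channel maxima.
    return points.max(axis=0)

def attention_pool(points, query):
    # Attention-style aggregation (sketch): a learned voxel query
    # scores every point feature; the softmax-weighted sum mixes
    # information from all points and is order-independent.
    scores = points @ query                  # (n_points,)
    weights = np.exp(scores - scores.max())  # numerically stable softmax
    weights /= weights.sum()
    return weights @ points                  # (feature_dim,)

rng = np.random.default_rng(0)
pts = rng.normal(size=(5, 8))   # 5 points in one voxel, 8-dim features
q = rng.normal(size=8)          # hypothetical learned query vector

# Both aggregators give the same result under any point reordering.
perm = rng.permutation(5)
assert np.allclose(pointnet_pool(pts), pointnet_pool(pts[perm]))
assert np.allclose(attention_pool(pts, q), attention_pool(pts[perm], q))
```

The permutation checks at the end mirror the abstract's claim: attention respects the set structure of sparse 3D points while being a more expressive aggregation function than max pooling.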