To the Point: Efficient 3D Object Detection in the Range Image with Graph Convolution Kernels

2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2021)

Abstract
3D object detection is vital for many robotics applications. For tasks where a 2D perspective range image exists, we propose to learn a 3D representation directly from this range image view. To this end, we designed a 2D convolutional network architecture that carries the 3D spherical coordinates of each pixel throughout the network. Its layers can consume an arbitrary convolution kernel in place of the default inner product kernel and exploit the underlying local geometry around each pixel. We outline four such kernels: a dense kernel according to the bag-of-words paradigm, and three graph kernels inspired by recent graph neural network advances: the Transformer, the PointNet, and the Edge Convolution. We also explore cross-modality fusion with the camera image, facilitated by operating in the perspective range image view. Our method performs competitively on the Waymo Open Dataset and improves the state-of-the-art AP for pedestrian detection from 69.7% to 75.5%. It is also efficient in that our smallest model, which still outperforms the popular PointPillars in quality, requires 180 times fewer FLOPS and model parameters.
Keywords
3D object detection,graph convolution kernels,robotics applications,2D perspective range image,2D convolutional network architecture,3D spherical coordinates,arbitrary convolution kernel,default inner product kernel,local geometry,dense kernel,bag-of-words paradigm,edge convolution,cross-modality fusion,camera image,perspective range image view,pedestrian detection,3D representation learning,graph neural network advances,transformer,PointNet,Waymo Open Dataset,PointPillars
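The edge-convolution kernel mentioned in the abstract replaces the inner product of a standard 2D convolution with an aggregation over the 3D geometry of each pixel's neighborhood. A minimal NumPy sketch of that idea is shown below; the function name, the single linear layer standing in for the edge MLP, and the ReLU + max aggregation are illustrative assumptions, not the paper's exact architecture.

```python
import numpy as np

def edge_conv_kernel(coords, feats, weight, k=3):
    """EdgeConv-style aggregation over a k x k range-image neighborhood (a sketch).

    coords: (H, W, 3) per-pixel 3D coordinates carried through the network
    feats:  (H, W, C) per-pixel features
    weight: (C + 3, C_out) hypothetical linear layer standing in for the edge MLP
    Returns: (H, W, C_out) features, max-pooled over the neighborhood.
    """
    H, W, C = feats.shape
    pad = k // 2
    # Edge-pad so every pixel has a full k x k neighborhood.
    coords_p = np.pad(coords, ((pad, pad), (pad, pad), (0, 0)), mode="edge")
    feats_p = np.pad(feats, ((pad, pad), (pad, pad), (0, 0)), mode="edge")
    out = np.full((H, W, weight.shape[1]), -np.inf)
    for dy in range(k):
        for dx in range(k):
            nb_coords = coords_p[dy:dy + H, dx:dx + W]
            nb_feats = feats_p[dy:dy + H, dx:dx + W]
            # Edge feature: neighbor feature concatenated with its relative
            # 3D offset, so the kernel sees local geometry, not pixel indices.
            edge = np.concatenate([nb_feats, nb_coords - coords], axis=-1)
            # Linear layer + ReLU per neighbor, then max-pool over neighbors.
            out = np.maximum(out, np.maximum(edge @ weight, 0.0))
    return out
```

Because the relative 3D offsets (rather than raw pixel positions) enter the edge features, two neighborhoods with the same local geometry produce the same response even at different range-image locations.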