PointFlowNet: Learning Representations for Rigid Motion Estimation from Point Clouds

2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2019)

Abstract
Despite significant progress in image-based 3D scene flow estimation, the performance of such approaches has not yet reached the fidelity required by many applications. Simultaneously, these applications are often not restricted to image-based estimation: laser scanners provide a popular alternative to traditional cameras, for example in the context of self-driving cars, as they directly yield a 3D point cloud. In this paper we propose to estimate 3D motion from such unstructured point clouds using a deep neural network. In a single forward pass, our model jointly predicts 3D scene flow as well as the 3D bounding box and rigid body motion of objects in the scene. While the prospect of estimating 3D scene flow from unstructured point clouds is promising, it is also a challenging task. We show that the traditional global representation of rigid body motion prohibits inference by CNNs, and propose a translation-equivariant representation to circumvent this problem. Training our deep network requires a large dataset. We therefore augment real scans from KITTI with virtual objects, realistically modeling occlusions and simulating sensor noise. A thorough comparison with classic and learning-based techniques highlights the robustness of the proposed approach.
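The abstract's core technical point is that a globally parameterized rigid motion (R, t) depends on the choice of coordinate origin, so a translation-equivariant CNN struggles to regress it, whereas a per-point (local) encoding of the same motion is equivariant. The sketch below, a minimal NumPy illustration not taken from the paper's code, encodes the motion at each point as the flow it induces, f(p) = Rp + t − p, and checks that this per-point representation is unchanged when the whole scene is shifted (while the global translation t must change to describe the same physical motion):

```python
import numpy as np

def local_motion(points, R, t):
    """Per-point flow induced by the rigid motion p -> R p + t.

    This local encoding is translation equivariant: shifting the scene
    only shifts where the values live, not the values themselves.
    """
    return points @ R.T + t - points

rng = np.random.default_rng(0)
pts = rng.normal(size=(5, 3))

# A rigid motion: rotation about the z-axis plus a translation.
theta = 0.3
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
t = np.array([1.0, -2.0, 0.5])

# Shift the entire scene by s. To describe the SAME physical motion of
# the shifted scene, the global translation must change to t + s - R s,
# i.e. the global representation is NOT translation invariant.
s = np.array([10.0, 0.0, 0.0])
t_shifted = t + s - R @ s

f = local_motion(pts, R, t)
f_shifted = local_motion(pts + s, R, t_shifted)

# The per-point flow at corresponding points is identical.
assert np.allclose(f, f_shifted)
```

The check works because f(p + s) under the adjusted translation expands to R(p + s) + t + s − Rs − (p + s) = Rp + t − p, the same value as before the shift; this is the sense in which a local representation sidesteps the origin-dependence of the global one.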
Keywords
rigid motion estimation, PointFlowNet, point clouds, learning