Every Pixel Counts ++: Joint Learning of Geometry and Motion with 3D Holistic Understanding.

IEEE Transactions on Pattern Analysis and Machine Intelligence (2019)

Cited by 240
Abstract
Learning to estimate 3D geometry from a single frame and optical flow from consecutive frames by watching unlabeled videos via deep convolutional networks has made significant progress recently. Current state-of-the-art (SoTA) methods treat the two tasks independently. One important assumption of existing depth estimation methods is that the scene contains no moving objects. In this paper, we propose to address the two tasks as a whole, i.e., to jointly understand per-pixel 3D geometry and motion. This eliminates the need for the static-scene assumption and enforces the inherent geometrical consistency during the learning process, yielding significantly improved results for both tasks. We call our method "Every Pixel Counts++", or "EPC++". Various loss terms are formulated to jointly supervise learning across geometrical cues, and an effective adaptive training strategy is proposed to achieve better performance. Comprehensive experiments were conducted on datasets with different scenes, including driving scenarios (KITTI 2012 and KITTI 2015), mixed outdoor/indoor scenes (Make3D), and synthetic animation (MPI Sintel). Performance on the five tasks of depth estimation, optical flow estimation, odometry, moving object segmentation, and scene flow estimation shows that our approach outperforms other SoTA methods, demonstrating the effectiveness of each module of our proposed method.
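To make the geometric coupling between the two tasks concrete, below is a minimal NumPy sketch (not the authors' implementation) of the relation the abstract relies on: given per-pixel depth, camera intrinsics, and a relative camera pose, the "rigid flow" induced by ego-motion alone can be computed by back-projecting pixels to 3D and reprojecting them into the next frame; pixels whose measured optical flow deviates from this rigid flow are candidates for the moving-object regions mentioned above. All function and variable names here are illustrative assumptions.

```python
import numpy as np

def rigid_flow(depth, K, R, t):
    """Flow caused purely by camera motion, assuming a static scene.

    depth: (H, W) per-pixel depth of frame 1
    K:     (3, 3) camera intrinsic matrix
    R, t:  rotation (3, 3) and translation (3,) from frame 1 to frame 2
    returns: (H, W, 2) flow field (u, v) in pixels
    """
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))            # pixel grid
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).astype(float)
    rays = pix @ np.linalg.inv(K).T                            # back-project to unit-depth rays
    pts3d = rays * depth[..., None]                            # 3D points in frame-1 camera coords
    pts3d_2 = pts3d @ R.T + t                                  # transform into frame-2 camera coords
    proj = pts3d_2 @ K.T                                       # project into frame-2 image plane
    uv2 = proj[..., :2] / np.clip(proj[..., 2:3], 1e-6, None)  # perspective divide
    return uv2 - pix[..., :2]                                  # displacement = rigid flow
```

In a joint formulation of this kind, the residual between the full predicted optical flow and this rigid flow isolates object motion, which is what allows the static-scene assumption to be dropped while still enforcing geometric consistency between depth, pose, and flow.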
Keywords
Depth estimation, optical flow prediction, unsupervised learning