MODE: Multi-view Omnidirectional Depth Estimation with 360-degree Cameras

European Conference on Computer Vision (2022)

Abstract
In this paper, we propose a two-stage omnidirectional depth estimation framework with multi-view 360-degree cameras. The framework first estimates depth maps from different camera pairs via omnidirectional stereo matching and then fuses the depth maps to achieve robustness against mud spots and water drops on camera lenses, as well as glare caused by intense light. We adopt spherical feature learning to address the distortion of panoramas. In addition, a synthetic 360-degree dataset consisting of 12K road-scene panoramas and 3K ground-truth depth maps is presented to train and evaluate 360-degree depth estimation algorithms. Our dataset takes soiled camera lenses and glare into consideration, which is more consistent with real-world environments. Experimental results show that the proposed framework generates reliable results in both synthetic and real-world environments, and it achieves state-of-the-art performance on different datasets. The code and data are available at https://github.com/nju-ee/MODE-2022.
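To make the two-stage structure concrete, here is a minimal Python sketch of the pipeline the abstract describes. The names `estimate_depth`, `stereo_match`, and `fuse` are hypothetical stand-ins for the learned networks, not the actual API of the MODE-2022 repository; this illustrates only the data flow, under the assumption that each stage is a callable model.

```python
import itertools
from typing import Callable, Sequence

import numpy as np

def estimate_depth(panoramas: Sequence[np.ndarray],
                   stereo_match: Callable,  # stage 1: omnidirectional stereo network (hypothetical)
                   fuse: Callable):         # stage 2: depth-fusion network (hypothetical)
    """Two-stage pipeline: per-pair omnidirectional stereo, then fusion.

    `stereo_match` takes two panoramas and returns one depth map;
    `fuse` combines the per-pair depth maps into a single estimate.
    """
    # Stage 1: estimate a depth map from every pair of 360-degree cameras.
    pair_depths = [stereo_match(panoramas[i], panoramas[j])
                   for i, j in itertools.combinations(range(len(panoramas)), 2)]
    # Stage 2: fuse the per-pair depth maps into one robust result.
    return fuse(pair_depths)
```

Because every scene point is observed by several camera pairs, the fusion stage can discount pairs whose views are degraded by soiled lenses or glare, which is the robustness mechanism the abstract highlights.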
Keywords
Omnidirectional depth estimation, Stereo matching, Spherical feature learning, 360-degree cameras, Multi-view