Distortion-aware Depth Estimation with Gradient Priors from Panoramas of Indoor Scenes

2022 International Conference on 3D Vision (3DV)

Abstract
Compared to 2D perspective images, panoramic images capture a larger field-of-view (FOV). Depth estimation from panoramas is an important task for 3D scene understanding and has made significant progress with the development of CNNs. However, existing CNN-based methods still struggle with the distortions introduced by Equirectangular Projection (ERP) (e.g., identical receptive fields are applied near the equator and at the two poles) and have difficulty generating accurate depth boundaries. In contrast to existing CNN-based methods, this paper proposes a novel Transformer-based method that copes with panoramic distortions and generates accurate depth boundaries. A Distortion-aware Transformer is designed using a yaw-invariant cycle shift and a distortion-guided partitioning, with the aim of alleviating the distortion effect by enlarging the receptive fields in both the horizontal and vertical directions. A Gradient Transformer is then proposed to enhance the features around depth boundaries, adopting gradient information as a boundary prior. Extensive experimental results show improvements over state-of-the-art methods, and our method also exhibits strong generalization capability. Finally, our method is extended to panoramic semantic segmentation.