Vis-MVSNet: Visibility-Aware Multi-view Stereo Network

BMVC (2022)

Abstract
Learning-based multi-view stereo (MVS) methods have demonstrated promising results. However, very few existing networks explicitly take pixel-wise visibility into consideration, resulting in erroneous cost aggregation from occluded pixels. In this paper, we explicitly infer and integrate pixel-wise occlusion information in the MVS network via matching uncertainty estimation. The pair-wise uncertainty map is jointly inferred with the pair-wise depth map and is further used as weighting guidance during the multi-view cost volume fusion. As such, the adverse influence of occluded pixels is suppressed in the cost fusion. The proposed framework, Vis-MVSNet, significantly improves depth accuracy when reconstructing scenes with severe occlusion. Extensive experiments are performed on the DTU, BlendedMVS, Tanks and Temples, and ETH3D datasets to justify the effectiveness of the proposed framework.
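The core idea of the abstract, using a per-pixel uncertainty map to down-weight occluded source views when fusing pair-wise cost volumes, can be sketched in a few lines. This is a minimal illustrative sketch, not the authors' implementation: the function name `fuse_cost_volumes`, the exponential mapping from uncertainty to weight, and the array shapes are all assumptions made for illustration.

```python
import numpy as np

def fuse_cost_volumes(pairwise_costs, uncertainties, eps=1e-6):
    """Visibility-aware fusion of pair-wise cost volumes (hypothetical sketch).

    pairwise_costs: list of (D, H, W) cost volumes, one per source view.
    uncertainties: list of (H, W) matching-uncertainty maps, jointly
        predicted with each pair-wise depth map.
    Returns a fused (D, H, W) cost volume.
    """
    # Map uncertainty to a visibility weight: confident (low-uncertainty)
    # pixels get weight near 1, occluded/uncertain pixels near 0.
    weights = [np.exp(-u) for u in uncertainties]
    # Weighted average over source views, broadcasting weights over depth.
    num = sum(w[None] * c for w, c in zip(weights, pairwise_costs))
    den = sum(weights)[None] + eps
    return num / den
```

In this sketch, a source view whose pixels are flagged as highly uncertain (e.g. occluded) contributes almost nothing to the fused volume, which matches the paper's stated goal of suppressing erroneous aggregation from occluded pixels.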
Keywords
Multi-view stereo, Visibility, MVSNet