Multi-resolution distillation for self-supervised monocular depth estimation

Pattern Recognition Letters (2023)

Abstract
Obtaining dense depth ground truth is not trivial, which has motivated self-supervised monocular depth estimation. Most self-supervised methods use the photometric loss as the primary supervisory signal to optimize a depth network. However, such self-supervised training often falls into an undesirable local minimum because the photometric loss is ambiguous. In this paper, we propose a novel self-distillation training scheme that provides the depth network with a new self-supervision signal: depth consistency among different input resolutions. We further introduce a gradient masking strategy that adjusts this depth-consistency signal during back-propagation to boost its effectiveness. Experiments demonstrate that our method brings meaningful performance improvements when applied to various depth network architectures. Furthermore, our method outperforms existing self-supervised methods on the KITTI, Cityscapes, and DrivingStereo datasets by a noteworthy margin.
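The abstract's core idea — penalizing disagreement between depth predictions made at different input resolutions, with a mask that suppresses the signal on unreliable pixels — can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: the nearest-neighbor upsampling, the L1 consistency term, and the relative-error masking rule (`mask_thresh`) are all assumptions chosen for clarity.

```python
import numpy as np

def resize_nearest(depth, out_h, out_w):
    """Nearest-neighbor resize of a 2-D depth map to (out_h, out_w)."""
    h, w = depth.shape
    rows = np.arange(out_h) * h // out_h
    cols = np.arange(out_w) * w // out_w
    return depth[rows][:, cols]

def depth_consistency_loss(depth_full, depth_low, mask_thresh=0.2):
    """Hypothetical multi-resolution consistency loss.

    depth_full : prediction at full resolution, shape (H, W)
    depth_low  : prediction at a lower resolution, upsampled here to (H, W)
    mask_thresh: pixels whose relative disagreement exceeds this are
                 excluded, a stand-in for the paper's gradient masking.
    """
    up = resize_nearest(depth_low, *depth_full.shape)
    diff = np.abs(depth_full - up)
    # Masking: keep only pixels where the two resolutions roughly agree,
    # so grossly inconsistent pixels do not dominate the signal (assumption).
    mask = (diff / np.maximum(depth_full, 1e-6)) < mask_thresh
    if mask.sum() == 0:
        return 0.0
    return float((diff * mask).sum() / mask.sum())
```

In a training loop, `depth_low` would come from feeding a downscaled copy of the same image through the network, so the loss distills the (typically more reliable) full-resolution prediction into the low-resolution one. In the actual method the masking operates on gradients during back-propagation rather than as a simple pixel mask.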
Keywords
Monocular depth estimation, Self-supervised learning, Self-distillation, Deep learning