Depth Removal Distillation for RGB-D Semantic Segmentation

IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2022

Abstract
RGB-D semantic segmentation is attracting wide attention because it outperforms conventional RGB-only methods. However, most RGB-D semantic segmentation methods require real depth information to segment RGB images effectively, so it is extremely challenging to exploit them when no depth input is available. To address this challenge, a general depth removal distillation method is proposed that removes the depth dependence from an RGB-D semantic segmentation model via knowledge distillation and can be applied to any CNN-based segmentation network. Specifically, a depth-aware convolution is adopted to construct the teacher network, which acquires sufficient knowledge from RGB-D images. Then, exploiting the structural consistency between depth-aware convolution and general convolution, the teacher network transfers the learned knowledge to a student network built from general convolutions by sharing parameters. The student network then compensates for the missing depth by learning from RGB images alone. Meanwhile, a Variable Temperature Cross Entropy (VTCE) loss function is proposed to further increase the accuracy of the student model through soft-target distillation. Extensive experiments on the NYUv2 and SUN RGB-D datasets demonstrate the superiority of the proposed approach.
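The soft-target distillation described above can be sketched with a standard temperature-scaled cross entropy between teacher and student outputs. The sketch below is a minimal, dependency-free illustration; the specific temperature schedule in `variable_temperature` (a linear anneal from a high to a low temperature) is a hypothetical stand-in, since the abstract does not specify how the VTCE loss varies its temperature.

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax; a higher temperature yields a softer
    (more uniform) distribution, exposing the teacher's dark knowledge."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature):
    """Soft-target cross entropy H(teacher_T, student_T), scaled by T^2
    (the usual Hinton-style factor that keeps gradient magnitudes
    comparable across temperatures)."""
    p_teacher = softmax(teacher_logits, temperature)
    p_student = softmax(student_logits, temperature)
    ce = -sum(pt * math.log(ps + 1e-12)
              for pt, ps in zip(p_teacher, p_student))
    return temperature ** 2 * ce

def variable_temperature(epoch, t_max=8.0, t_min=1.0, total_epochs=100):
    """Hypothetical schedule: start with a high temperature (very soft
    targets) and anneal linearly toward t_min as training progresses."""
    frac = min(epoch / total_epochs, 1.0)
    return t_max + frac * (t_min - t_max)
```

In a per-pixel segmentation setting this loss would be averaged over all pixels, with `teacher_logits` coming from the depth-aware teacher and `student_logits` from the RGB-only student.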
Keywords
RGB-D semantic segmentation, convolutional neural networks, knowledge distillation