Hybrid knowledge distillation from intermediate layers for efficient Single Image Super-Resolution

Neurocomputing (2023)

Abstract
Convolutional and Transformer models have achieved remarkable results for Single Image Super-Resolution (SISR). However, the tremendous memory and computation consumption of these models restricts their use in resource-limited scenarios. Knowledge distillation, as an effective model compression technique, has received considerable research attention for the SISR task. In this paper, we propose a novel efficient SISR method based on hybrid knowledge distillation from intermediate layers, termed HKDSR, which transfers knowledge from frequency information into RGB information. To accomplish this, we first pre-train the teacher with multiple intermediate upsampling layers to generate intermediate SR outputs. We then construct two kinds of intermediate knowledge: a Frequency Similarity Matrix (FSM) and Adaptive Channel Fusion (ACF). FSM mines the frequency-similarity relationships between the ground-truth (GT) HR image and the intermediate SR outputs of the teacher and student via the Discrete Wavelet Transformation. ACF merges the intermediate SR output of the teacher and the GT HR image along the channel dimension to adaptively align the intermediate SR output of the student. Finally, we incorporate the knowledge from FSM and ACF into the reconstruction loss to effectively improve student performance. Extensive experiments demonstrate the effectiveness of HKDSR on different benchmark datasets and network architectures.
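The abstract names the two intermediate-knowledge constructs (FSM and ACF) but does not spell out their exact formulations. The following PyTorch sketch is a minimal illustration under stated assumptions, not the paper's implementation: FSM is taken here to be a Gram-style cosine-similarity matrix over single-level Haar DWT subbands, and ACF a learnable 1x1-conv fusion of the teacher SR output and the GT HR image. All names (`haar_dwt`, `frequency_similarity_matrix`, `AdaptiveChannelFusion`, `hkdsr_intermediate_losses`) are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def haar_dwt(x: torch.Tensor):
    """Single-level 2D Haar DWT. x: (B, C, H, W) with even H, W.
    Returns the four subbands (LL, LH, HL, HH), each (B, C, H/2, W/2)."""
    a = x[..., 0::2, 0::2]  # top-left pixel of each 2x2 block
    b = x[..., 0::2, 1::2]  # top-right
    c = x[..., 1::2, 0::2]  # bottom-left
    d = x[..., 1::2, 1::2]  # bottom-right
    ll = (a + b + c + d) / 2  # low-frequency approximation
    lh = (a + b - c - d) / 2  # horizontal detail
    hl = (a - b + c - d) / 2  # vertical detail
    hh = (a - b - c + d) / 2  # diagonal detail
    return ll, lh, hl, hh

def frequency_similarity_matrix(sr: torch.Tensor, hr: torch.Tensor):
    """Hypothetical FSM: cosine similarity between every DWT subband of an
    SR output and every DWT subband of the GT HR image -> (B, 4, 4)."""
    sr_bands = torch.stack([s.flatten(1) for s in haar_dwt(sr)], dim=1)  # (B, 4, N)
    hr_bands = torch.stack([h.flatten(1) for h in haar_dwt(hr)], dim=1)  # (B, 4, N)
    sr_bands = F.normalize(sr_bands, dim=-1)
    hr_bands = F.normalize(hr_bands, dim=-1)
    return sr_bands @ hr_bands.transpose(1, 2)

class AdaptiveChannelFusion(nn.Module):
    """Hypothetical ACF: merge teacher SR and GT HR along the channel axis,
    then project back to RGB to form an alignment target for the student."""
    def __init__(self, channels: int = 3):
        super().__init__()
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, teacher_sr: torch.Tensor, hr: torch.Tensor):
        return self.fuse(torch.cat([teacher_sr, hr], dim=1))

def hkdsr_intermediate_losses(student_sr, teacher_sr, hr, acf):
    """Two intermediate-knowledge terms to be added to the reconstruction loss."""
    fsm_loss = F.l1_loss(frequency_similarity_matrix(student_sr, hr),
                         frequency_similarity_matrix(teacher_sr, hr))
    # Detach the fused target so it acts as a fixed alignment signal.
    acf_loss = F.l1_loss(student_sr, acf(teacher_sr, hr).detach())
    return fsm_loss, acf_loss

# Toy usage at one intermediate scale (random tensors stand in for real images).
student_sr = torch.rand(2, 3, 64, 64)
teacher_sr = torch.rand(2, 3, 64, 64)
hr = torch.rand(2, 3, 64, 64)
fsm_loss, acf_loss = hkdsr_intermediate_losses(student_sr, teacher_sr, hr,
                                               AdaptiveChannelFusion(3))
```

Detaching the fused ACF output treats it as a fixed target for the student; whether the paper trains the fusion jointly with the student, and how the two terms are weighted against the reconstruction loss, is not specified in the abstract.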
Keywords
Image super-resolution, Model compression, Knowledge distillation, Discrete wavelet transformation