Enhanced Lightweight Network with CNN and Improved Transformer for Image Super-Resolution

Hengyu Li, Juan Wang, Jie Liu, Ding Chen, Lei Shi

ICNC-FSKD (2023)

Abstract
In recent years, there have been significant advancements in deep learning-based lightweight image super-resolution (SR) reconstruction techniques. However, challenges remain in practical applications. Many existing lightweight SR algorithms simplify the model by reducing the number of parameters or changing the combination of convolutions, which significantly degrades model performance and stability and leads to poor reconstruction results. To address this issue, we propose a Cosine Self-Attention mechanism and deepen the network to improve the Swin Transformer, enhancing the model's performance and stability. Experimental results show that the proposed approach achieves stronger and more stable reconstruction with a lower parameter count than existing lightweight SR models, outperforming them in both reconstruction quality and model complexity. The PSNR/SSIM of our method for ×2 and ×4 SR on the Set5, Set14, BSD100, Urban100, and Manga109 datasets exceed those of most existing lightweight SR models.
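The abstract names a Cosine Self-Attention mechanism but does not spell out its form. A common construction with this name (used, for example, in Swin Transformer V2's scaled cosine attention) replaces the dot-product similarity between queries and keys with their cosine similarity divided by a temperature, which bounds the attention logits and stabilizes training. The sketch below illustrates that idea in NumPy; the function name, single-head layout, and fixed temperature `tau` are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def cosine_self_attention(x, wq, wk, wv, tau=0.1):
    """Illustrative single-head cosine self-attention (assumed form).

    x  : (n, d) token features
    wq, wk, wv : (d, d) projection matrices
    tau: temperature scaling the cosine similarities
    """
    q, k, v = x @ wq, x @ wk, x @ wv
    # L2-normalize queries and keys so q @ k.T is a cosine
    # similarity, bounded in [-1, 1] regardless of feature scale
    q = q / np.linalg.norm(q, axis=-1, keepdims=True)
    k = k / np.linalg.norm(k, axis=-1, keepdims=True)
    scores = (q @ k.T) / tau
    # numerically stable softmax over the key dimension
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v
```

Because the logits are bounded by 1/tau, this variant avoids the large attention magnitudes that can destabilize deep Transformer stacks, which is consistent with the stability motivation stated in the abstract.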
Keywords
Deep learning,Image super-resolution,Attention mechanism,Swin Transformer