Gradually Growing Residual And Self-Attention Based Dense Deep Back Projection Network For Large Scale Super-Resolution Of Image

Pattern Recognition and Machine Intelligence, PReMI 2019, Part I (2019)

Abstract
Due to its strong capacity for handling unstructured data, deep learning has been widely applied to the task of single image super-resolution (SISR). These algorithms have shown promising results for small-scale super-resolution but are not robust at large scales. In addition, they are computationally complex and require high-end hardware. A large-scale super-resolution framework finds application in smart-phones, since these devices have limited computational power. In this context, we present a novel light-weight architecture, the Gradually growing Residual and self-Attention based Dense Deep Back Projection Network (GRAD-DBPN), for large-scale image super-resolution (SR). The network is made of cascaded self-Attention based Residual Dense Deep Back Projection Network (ARD-DBPN) blocks that perform super-resolution gradually, where each block performs 2X super-resolution and is fine-tuned in an end-to-end manner. The residual architecture facilitates faster convergence of the network and overcomes the issue of vanishing gradients. Experimental results on different benchmark data-sets are presented to compare the efficacy and effectiveness of the architecture.
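To illustrate the "gradually growing" cascade described above, the following is a minimal PyTorch sketch, not the authors' exact ARD-DBPN block: it stacks log2(scale) stand-in 2X blocks (a residual convolutional body followed by pixel-shuffle upsampling), whereas the paper's blocks use dense back-projection units with self-attention and spectral normalization. All module and class names here are hypothetical.

```python
# Hedged sketch of gradual 2X-per-block super-resolution; the block internals
# are simplifying assumptions, not the ARD-DBPN design from the paper.
import torch
import torch.nn as nn

class SimpleSRBlock2x(nn.Module):
    """Stand-in for one ARD-DBPN block: residual features + 2x pixel shuffle."""
    def __init__(self, channels=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.PReLU(),
            nn.Conv2d(channels, channels, 3, padding=1),
        )
        self.up = nn.Sequential(
            nn.Conv2d(channels, channels * 4, 3, padding=1),
            nn.PixelShuffle(2),  # doubles spatial resolution
            nn.PReLU(),
        )

    def forward(self, x):
        # Residual connection before upsampling eases gradient flow.
        return self.up(x + self.body(x))

class GradualSR(nn.Module):
    """Cascade of 2x blocks: a 4x model uses 2 blocks, an 8x model uses 3."""
    def __init__(self, scale=4, channels=64):
        super().__init__()
        assert scale & (scale - 1) == 0, "scale must be a power of two"
        n_blocks = scale.bit_length() - 1
        self.head = nn.Conv2d(3, channels, 3, padding=1)
        self.blocks = nn.Sequential(*[SimpleSRBlock2x(channels) for _ in range(n_blocks)])
        self.tail = nn.Conv2d(channels, 3, 3, padding=1)

    def forward(self, x):
        return self.tail(self.blocks(self.head(x)))

if __name__ == "__main__":
    lr = torch.randn(1, 3, 32, 32)
    sr = GradualSR(scale=4)(lr)
    print(sr.shape)  # torch.Size([1, 3, 128, 128])
```

In such a cascade, each stage can be trained or fine-tuned while the next 2X stage is appended, which is the gradual-growing idea the abstract refers to.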
Keywords
Large Scale Super-Resolution, Gradual, Residual, Dense, Deep Back Projection Network (DBPN), Self-attention, Spectral normalization