Deep HyFeat Based Attention in Attention Model for Face Super-Resolution

IEEE Trans. Instrum. Meas. (2023)

Abstract
Face super-resolution (SR) is the task of generating high-resolution (HR) face images from low-resolution (LR) inputs. Recently, deep learning-based methods have shown remarkable progress in the SR field. Most methods perform auxiliary tasks such as face parsing, landmark detection, and attention to generate the HR images. However, parsing-map- and landmark-guided models require supplementary labeled datasets, which are difficult to obtain in real life. The attention mechanism does not require extra dataset labeling and is also beneficial for face SR. However, such methods focus on a few critical features and ignore the remaining ones, which sometimes causes valuable features to be discarded. Therefore, this article proposes a novel deep hybrid feature (HyFeat)-based Attention in Attention model for face SR. The proposed model uses a coarse SR network and a deep convolutional neural network (CNN) to generate the HR image. The coarse SR network upsamples the LR image and produces a coarse super-resolved image, which is then passed to the deep CNN model. The proposed work incorporates the HyFeat attention in attention unit (HyFA(2)U), consisting of a HyFeat block and an attention in attention block (A(2)B), into the deep CNN model to improve the visual quality of the output face images. The HyFeat block assists the model in extracting coarse features and learning enriched contextual information to enhance their details. The A(2)B preserves both attentive and non-attentive beneficial features while suppressing unwanted ones: the attention branch focuses on specific facial features and ignores the rest, while the non-attention branch learns the informative features that the attention branch ignores. The proposed model stacks HyFA(2)U units to focus on different facial components and enhance the features, improving the quality of the resulting faces.
Experimental results show that the proposed model achieves state-of-the-art performance on the standard datasets CelebAHQ, Helen, FFHQ, and LFW. The proposed method improves on the best models available in the literature by more than 0.35 dB in peak signal-to-noise ratio (PSNR) and 0.012 in structural similarity (SSIM) across different datasets.
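The dual-branch fusion in the A(2)B described above can be sketched roughly as follows. This is a minimal illustrative sketch, not the paper's actual architecture: the sigmoid gating, the weight matrices, and the additive fusion are assumptions chosen to show the idea that an attention branch emphasizes some features while a complementary non-attention branch recovers what the attention mask suppresses.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def a2b_sketch(features, w_attn, w_non):
    """Illustrative attention-in-attention fusion (assumed form).

    features : (batch, channels) feature array
    w_attn   : weights producing the attention mask (hypothetical)
    w_non    : weights of the non-attention branch (hypothetical)
    """
    # Attention branch: a soft mask in (0, 1) highlighting salient features.
    mask = sigmoid(features @ w_attn)
    attentive = mask * features
    # Non-attention branch: processes the complement of the mask,
    # so features the attention branch suppresses are not lost.
    non_attentive = (1.0 - mask) * (features @ w_non)
    # Fuse both branches; the output keeps attentive and
    # non-attentive-but-useful information.
    return attentive + non_attentive

# Usage with random features and weights (shapes only, no training):
rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))
out = a2b_sketch(x, rng.standard_normal((8, 8)) * 0.1,
                 rng.standard_normal((8, 8)) * 0.1)
print(out.shape)  # (4, 8): feature dimensions are preserved
```

In the paper the branches are convolutional and the units are stacked (HyFA(2)U repeated) so that successive blocks can attend to different facial components; this sketch only shows a single fusion step on flat feature vectors.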
Keywords
Faces, Feature extraction, Image reconstruction, Task analysis, Superresolution, Convolutional neural networks, Training, Attention in attention block (A(2)B), convolutional neural network (CNN), face hallucination, face super-resolution (SR), hybrid feature (HyFeat), hybrid feature attention in attention unit (HyFA(2)U)