QS-Hyper: A Quality-Sensitive Hyper Network for No-Reference Image Quality Assessment

ICONIP(2021)

Abstract
Blind/no-reference image quality assessment (IQA) aims to provide a quality score for a single image without a reference image. Deep learning models can capture various image artifacts and have driven significant progress in this area. However, current IQA methods generally rely on convolutional neural networks (CNNs) pre-trained on classification tasks to obtain image representations, which do not faithfully represent image quality. To solve this problem, this paper uses semi-supervised representation learning to train a quality-sensitive encoder (QS-encoder) that extracts image features specifically for image quality. Intuitively, such features are more conducive to training an IQA model than features learned for classification. The QS-encoder is then plugged into a carefully designed hyper network to build a quality-sensitive hyper network (QS-Hyper) that solves IQA tasks in more general and complex environments. Extensive experiments on public IQA datasets show that our method outperforms most state-of-the-art methods on both the Pearson linear correlation coefficient (PLCC) and Spearman's rank correlation coefficient (SRCC), achieving a 3% PLCC improvement and a 3.9% SRCC improvement on the TID2013 dataset. These results demonstrate that our method is superior at capturing various image distortions, meeting a broader range of evaluation requirements.
Keywords
QS-Hyper, quality-sensitive, no-reference