Fast, Accurate and Lightweight Super-Resolution with Neural Architecture Search

Hailong Ma
Ruijun Xu
Qingyuan Li

arXiv preprint abs/1901.07261 (Computer Vision and Pattern Recognition), 2019.

Keywords:
peak signal-to-noise ratio, evolutionary algorithm, neural architecture search, reinforcement learning, roulette wheel selection

Abstract:

Deep convolutional neural networks demonstrate impressive results in the super-resolution domain. A series of studies concentrate on improving peak signal-to-noise ratio (PSNR) by using much deeper layers, which are not friendly to constrained resources. Pursuing a trade-off between the restoration capacity and the simplicity of models is still…

Introduction
  • Introduction and Related Works

    As a classical task in computer vision, single image super-resolution (SISR) aims to restore a high-resolution image from a degraded low-resolution one, which is known as an ill-posed inverse problem.
  • Neural architecture search has produced dominating models in classification tasks [Zoph and Le, 2016; Zoph et al., 2017].
  • Following this trend, a novel work by [Chu et al., 2019] has shed light on the SISR task with a reinforced evolutionary search method, achieving results that outperform some notable networks, including VDSR [Kim et al., 2016a].
  • The authors' main contributions can be summarized in the following four aspects.
Highlights
  • Introduction and Related Works

    As a classical task in computer vision, single image super-resolution (SISR) aims to restore a high-resolution image from a degraded low-resolution one, which is known as an ill-posed inverse problem
  • At the end of the incomplete training, we evaluate mean squared errors on the test datasets
  • About 10k models are generated in total, with a population of 64 per iteration
  • Each model is trained with a batch size of 16 for 200 epochs
  • We presented a novel elastic method for NAS that incorporates both micro and macro search, handling neural architectures at multiple granularities
Methods
  • 6.1 Setup

    In the experiment, about 10k models are generated in total, with a population of 64 per iteration.
  • The learning rate is initialized to 10^-4 and kept unchanged at this stage
Conclusion
  • The authors presented a novel elastic method for NAS that incorporates both micro and macro search, handling neural architectures at multiple granularities.
  • Unlike human-designed and single-objective NAS models, the method can generate models of different flavors in one run, ranging from fast and lightweight to relatively large and more accurate.
  • It offers a feasible way for engineers to compress existing popular human-designed models or to design architectures at various levels for constrained devices.
Summary
  • Introduction:

    Introduction and Related Works

    As a classical task in computer vision, single image super-resolution (SISR) aims to restore a high-resolution image from a degraded low-resolution one, which is known as an ill-posed inverse problem.
  • Neural architecture search has produced dominating models in classification tasks [Zoph and Le, 2016; Zoph et al., 2017].
  • Following this trend, a novel work by [Chu et al., 2019] has shed light on the SISR task with a reinforced evolutionary search method, achieving results that outperform some notable networks, including VDSR [Kim et al., 2016a].
  • The authors' main contributions can be summarized in the following four aspects.
  • Methods:

    6.1 Setup

    In the experiment, about 10k models are generated in total, with a population of 64 per iteration.
  • The learning rate is initialized to 10^-4 and kept unchanged at this stage
  • Conclusion:

    The authors presented a novel elastic method for NAS that incorporates both micro and macro search, handling neural architectures at multiple granularities.
  • Unlike human-designed and single-objective NAS models, the method can generate models of different flavors in one run, ranging from fast and lightweight to relatively large and more accurate.
  • It offers a feasible way for engineers to compress existing popular human-designed models or to design architectures at various levels for constrained devices.
Tables
  • Table 1: Comparisons with state-of-the-art methods on the ×2 super-resolution task
Study subjects and analysis
At a comparable level of FLOPs, our model FALSR-A (Figure 3) outperforms CARN [Ahn et al., 2018] with higher scores. In addition, it dominates DRCN [Kim et al., 2016b] and MoreMNAS-A [Chu et al., 2019] across all three objectives on four datasets. Moreover, it achieves higher PSNR and SSIM with fewer FLOPs than VDSR [Kim et al., 2016a], DRRN [Tai et al., 2017a] and many others.
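PSNR, the metric compared above, is derived directly from the mean squared error of the restored image. A quick sketch, assuming images scaled to [0, 1]:

```python
import math

def psnr(mse, max_val=1.0):
    """Peak signal-to-noise ratio in dB for a given mean squared error."""
    return 10.0 * math.log10(max_val ** 2 / mse)

# An MSE of 1e-3 on [0, 1] images corresponds to 30 dB.
print(psnr(1e-3))  # -> 30.0
```

Because PSNR is a monotone transform of MSE, minimizing test MSE during the search is equivalent to maximizing PSNR.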

Reference
  • [Ahn et al., 2018] Namhyuk Ahn, Byungkon Kang, and Kyung-Ah Sohn. Fast, accurate, and lightweight super-resolution with cascading residual network. In ECCV, 2018.