Embarrassingly Simple Binarization For Deep Single Imagery Super-Resolution Networks

IEEE TRANSACTIONS ON IMAGE PROCESSING (2021)

Abstract
Deep convolutional neural networks (DCNNs) have shown pleasing performance in single image super-resolution (SISR). To deploy them on real devices with limited storage and computational resources, a promising solution is to binarize the network, i.e., to quantize each floating-point weight and activation into 1 bit. However, existing works on binarizing DCNNs still suffer from severe performance degradation in SISR. To mitigate this problem, we argue that the performance degradation mainly comes from the lack of an appropriate constraint on the network weights, which makes it difficult to sensitively reverse the binarization results of these weights using the backpropagated gradient during training and thus limits the flexibility of the network in fitting extensive training samples. Inspired by this, we present an embarrassingly simple but effective binarization scheme for SISR, which clearly relieves the performance degradation resulting from network binarization and is applicable to different DCNN architectures. Specifically, we force each weight to follow a compact uniform prior, under which the weight takes a very small absolute value close to zero and its binarization result can be straightforwardly reversed even by a small backpropagated gradient. By doing this, the flexibility and the generalization performance of the binarized network are improved. Moreover, such a prior performs much better when real identity shortcuts are introduced into the network. In addition, to avoid falling into bad local minima during training, we employ a pixel-wise curriculum learning strategy to learn the constrained weights in an easy-to-hard manner. Experiments on four SISR benchmark datasets demonstrate the effectiveness of the proposed binarization method in binarizing different SISR network architectures; it even achieves performance comparable to a baseline with 5 quantization bits.
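The core idea of the abstract can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: weights are binarized by their sign (scaled by the mean absolute value, a common convention in binarized networks), updated with a straight-through gradient, and then clipped into a small symmetric range to mimic the compact uniform prior. Because every weight stays close to zero, even a modest gradient can flip its sign; the bound `0.05` and the toy values below are purely illustrative.

```python
import numpy as np

def binarize(w):
    """Map weights to {-alpha, +alpha}, with alpha = mean |w| (a common scaling)."""
    alpha = np.abs(w).mean()
    return alpha * np.sign(w)

def train_step(w, grad, lr=0.1, bound=0.05):
    """One SGD step using a straight-through gradient (the gradient w.r.t. the
    binarized weight is applied to the latent float weight), followed by clipping
    into [-bound, bound] to mimic the compact uniform prior (bound is illustrative)."""
    w = w - lr * grad
    return np.clip(w, -bound, bound)

# All latent weights have |w| <= bound, so a small gradient can reverse a sign:
w = np.array([0.04, -0.03, 0.01])
g = np.array([0.1, -0.1, 1.0])
w_new = train_step(w, g)
print(np.sign(w_new))  # the third weight's sign is flipped by its larger gradient
```

The clipping step is what distinguishes this sketch from plain binarized training: without the compact range, a weight with a large magnitude would need many accumulated gradient steps before its binarization result could change.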
Keywords
Training, Degradation, Quantization (signal), Performance evaluation, Superresolution, Computational modeling, Task analysis, Binarized neural network, single image super-resolution, curriculum learning