Learning to Restore Low-Light Images via Decomposition-and-Enhancement

CVPR, pp. 2278-2287, 2020.

Keywords:
low-light image, frequency layers, low-light image enhancement, image enhancement, signal-to-noise ratio

Abstract:

Low-light images typically suffer from two problems. First, they have low visibility (i.e., small pixel values). Second, noise becomes significant and disrupts the image content, due to the low signal-to-noise ratio. Most existing low-light image enhancement methods, however, learn from noise-negligible datasets. They rely on users having good...

Introduction
  • Low-light imaging is widely used for various purposes, e.g., night-time surveillance and personal scenery imaging at sunset.
  • The visibility of low-light images in the standard RGB (sRGB) space does not match human perception, due to quantization.
  • Typical image enhancement methods [46, 51, 24, 7, 40, 34, 48, 4] aim to recover low-light images so that they match human perception.
  • These methods rely on users having good photographic skills to take images with low noise, so that the methods can focus on learning the enhancement mapping rather than noise removal. [Figure 1: (a) sRGB input, (b) histogram equalization, (c) low-frequency layer, (d) high-frequency layer; a toy version of this frequency decomposition is sketched below.]
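As a toy illustration of the frequency decomposition shown in panels (c) and (d), a low-light image can be split into a smooth low-frequency layer and a residual high-frequency layer. The sketch below uses a fixed Gaussian blur with OpenCV and NumPy purely for visualization; the file name, kernel size, and sigma are illustrative assumptions, and the paper itself performs the decomposition adaptively inside the network (via the ACE module) rather than with a fixed filter.

    # Toy frequency decomposition of a low-light sRGB image (hypothetical file name).
    # The paper's decomposition is learned; a fixed Gaussian blur is used here only
    # to visualize the two layers.
    import cv2
    import numpy as np

    img = cv2.imread("low_light.png").astype(np.float32) / 255.0

    low_freq = cv2.GaussianBlur(img, (21, 21), 5)   # smooth base layer (visibility, color)
    high_freq = img - low_freq                      # residual layer (edges, texture, and most noise)

    cv2.imwrite("low_freq.png", np.clip(low_freq * 255, 0, 255).astype(np.uint8))
    cv2.imwrite("high_freq.png", np.clip((high_freq + 0.5) * 255, 0, 255).astype(np.uint8))

The residual layer carries both fine details and most of the visible noise, which motivates recovering the low-frequency layer (and suppressing noise) first, and then using it to guide detail enhancement.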
Highlights
  • Low-light imaging is widely used for various purposes, e.g., night-time surveillance and personal scenery imaging at sunset
  • We propose a novel neural network that leverages an Attention to Context Encoding (ACE) module to adaptively select low-frequency information for recovering the low-frequency layer and removing noise in the first stage, and high-frequency information for enhancing details in the second stage
  • The network couples the ACE module, which decomposes the input image for adaptively enhancing the high-/low-frequency layers, with a Cross Domain Transformation (CDT) module for noise suppression and detail enhancement (a schematic sketch follows this list)
  • We have studied the noisy low-light image enhancement problem
  • We propose a novel frequency-based image decomposition-and-enhancement model to adaptively enhance the image contents and details in different frequency layers, while at the same time suppressing noise
  • We have conducted extensive experiments to show that the proposed method outperforms state-of-the-art approaches in enhancing practical noisy low-light images
  • We have presented a network with the proposed Attention to Context Encoding (ACE) module for adaptively enhancing the high- and low-frequency layers, and a Cross Domain Transformation (CDT) module for noise suppression and detail enhancement
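The two-stage design named in the highlights can be pictured with the schematic PyTorch sketch below. The module names, channel widths, and the sigmoid-gated attention are illustrative assumptions, not the authors' released ACE/CDT architecture: stage one consumes the low-frequency layer to produce a brightened, noise-suppressed base, and stage two consumes that base together with the high-frequency layer to restore details.

    # Schematic two-stage enhancer (hypothetical sizes and names); the real ACE and
    # CDT modules in the paper are more involved than this sketch.
    import torch
    import torch.nn as nn

    class AttentionBlock(nn.Module):
        """Gates context features with a learned spatial attention map (ACE-like idea)."""
        def __init__(self, ch):
            super().__init__()
            self.context = nn.Conv2d(ch, ch, 3, padding=2, dilation=2)  # enlarged receptive field
            self.gate = nn.Sequential(nn.Conv2d(ch, ch, 3, padding=1), nn.Sigmoid())

        def forward(self, x):
            return x + self.gate(x) * self.context(x)

    class TwoStageEnhancer(nn.Module):
        def __init__(self, ch=32):
            super().__init__()
            # Stage 1: recover the low-frequency layer and suppress noise.
            self.stage1 = nn.Sequential(
                nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(inplace=True),
                AttentionBlock(ch), nn.Conv2d(ch, 3, 3, padding=1))
            # Stage 2: enhance high-frequency details, conditioned on the stage-1 output.
            self.stage2 = nn.Sequential(
                nn.Conv2d(6, ch, 3, padding=1), nn.ReLU(inplace=True),
                AttentionBlock(ch), nn.Conv2d(ch, 3, 3, padding=1))

        def forward(self, low_freq, high_freq):
            base = self.stage1(low_freq)
            detail = self.stage2(torch.cat([base, high_freq], dim=1))
            return base, base + detail

    # Dummy usage on random 256x256 frequency layers.
    model = TwoStageEnhancer()
    base, enhanced = model(torch.rand(1, 3, 256, 256), torch.rand(1, 3, 256, 256))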
Methods
  • Quantitative comparisons (Tables 1 and 2) cover JieP [5], WVM [14], DeepUPE [40], DeepUPE∗ [40], DRHT [46], HDRCNN [12], DSLR [24], LIME [20], SID [6], SID∗ [6], and ours, reporting PSNR↑ and SSIM↑ against the input.
  • Combinations of enhancement and denoising methods are also evaluated; for example, LIME [20] + BM3D [11] reaches 17.90 dB PSNR and 0.361 SSIM. The metric computation is sketched after this list.
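The PSNR↑ / SSIM↑ figures quoted above can be reproduced for any pair of enhanced result and ground truth with scikit-image. A minimal sketch, assuming 8-bit sRGB images, scikit-image ≥ 0.19, and hypothetical file names:

    # Compute PSNR and SSIM between an enhanced image and its ground truth
    # (hypothetical file names); higher is better for both metrics.
    from skimage.io import imread
    from skimage.metrics import peak_signal_noise_ratio, structural_similarity

    gt = imread("ground_truth.png")       # HxWx3, uint8
    pred = imread("enhanced_result.png")  # HxWx3, uint8

    psnr = peak_signal_noise_ratio(gt, pred, data_range=255)
    ssim = structural_similarity(gt, pred, channel_axis=-1, data_range=255)
    print(f"PSNR: {psnr:.2f} dB, SSIM: {ssim:.4f}")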
Results
  • The authors have conducted extensive experiments to show that the proposed method outperforms state-of-the-art approaches in enhancing practical noisy low-light images.
Conclusion
  • Conclusion and Future Work

    In this paper, the authors have studied the noisy low-light image enhancement problem.
  • The authors have observed that noise affects images differently in different frequency layers
  • Based on this observation, the authors propose a novel frequency-based image decomposition-and-enhancement model to adaptively enhance the image contents and details in different frequency layers, while at the same time suppressing noise.
  • The authors have presented a network with the proposed Attention to Context Encoding (ACE) module for adaptively enhancing the high- and low-frequency layers, and a Cross Domain Transformation (CDT) module for noise suppression and detail enhancement.
  • The authors have conducted extensive experiments to verify the effectiveness of the method against state-of-the-art methods
Tables
  • Table1: Comparison to the state-of-the-art enhancement methods. Best performance is marked in bold. Note that an ∗ indicates that the model is retrained on our sRGB training set
  • Table2: Comparison to different combinations of enhancement and denoising methods. Best performance is marked in bold
  • Table3: Internal analysis of the proposed method
Related work
  • Low-light image enhancement. One line of methods enhances low-light images using different image-to-image regression functions. Represented by histogram equalization [36] and gamma correction, global and local contrast enhancement operators have been proposed based on detecting semantic regions (e.g., face and sky) [25], matching region templates [23], or exploiting contrast statistics in image boundaries and textured regions [38]. More advanced deep learning based methods learn the mapping functions from high-quality user-retouched images or from images taken with high-end cameras, using bilateral learning [15], intermediate HDR supervision [46], adversarial learning [24, 7], or reinforcement learning [34, 48]. Another line of work consists of Retinex-based image enhancement methods [20, 14, 51, 5, 40, 47], which decompose the input low-light image into illumination and reflectance, and then enhance the illumination component.

    However, existing enhancement methods may fail to recover low-light images with low SNRs, as shown in Figure 2. The key reason is that these methods [24, 34, 7, 48, 46] typically assume that the images are taken by photographic experts with insignificant noise levels; hence, they are unable to enhance noisy low-light images. A toy numerical illustration of how naive brightening amplifies noise follows this paragraph.
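The failure mode is easy to see numerically. In the sketch below (the noise level and gamma value are illustrative assumptions, not figures from the paper), naive gamma correction brightens a simulated dark capture, but the absolute noise amplitude grows along with the signal, so noise that was imperceptible in the dark image becomes clearly visible after enhancement unless denoising is handled jointly.

    # Toy example: brightening a simulated noisy low-light image amplifies the noise.
    import numpy as np

    rng = np.random.default_rng(0)
    clean = np.full((64, 64), 0.05)                                    # dark scene (~5% brightness)
    noisy = np.clip(clean + rng.normal(0.0, 0.02, clean.shape), 0, 1)  # low-SNR capture

    gamma = 1 / 2.2
    brightened = noisy ** gamma                                        # naive brightening

    noise_before = np.std(noisy - clean)
    noise_after = np.std(brightened - clean ** gamma)
    print(f"noise std before: {noise_before:.3f} ({noise_before * 255:.0f} of 255 levels)")
    print(f"noise std after : {noise_after:.3f} ({noise_after * 255:.0f} of 255 levels)")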
Funding
  • This work was partly supported by NNSFC Grants 91748104, 61972067, 61632006, U1811463, U1908214, 61751203; and the National Key Research and Development Program of China, Grant 2018AAA0102003
Reference
  • [1] Abdelrahman Abdelhamed, Stephen Lin, and Michael Brown. A high-quality denoising dataset for smartphone cameras. In CVPR, 2018.
  • [2] Tim Brooks, Ben Mildenhall, Tianfan Xue, Jiawen Chen, Dillon Sharlet, and Jonathan T. Barron. Unprocessing images for learned raw denoising. In CVPR, 2019.
  • [3] Antoni Buades, Bartomeu Coll, and J.-M. Morel. A non-local algorithm for image denoising. In CVPR, 2005.
  • [4] Vladimir Bychkovsky, Sylvain Paris, Eric Chan, and Fredo Durand. Learning photographic global tonal adjustment with a database of input/output image pairs. In CVPR, 2011.
  • [5] Bolun Cai, Xianming Xu, Kailing Guo, Kui Jia, Bin Hu, and Dacheng Tao. A joint intrinsic-extrinsic prior model for retinex. In ICCV, 2017.
  • [6] Chen Chen, Qifeng Chen, Jia Xu, and Vladlen Koltun. Learning to see in the dark. In CVPR, 2018.
  • [7] Yu-Sheng Chen, Yu-Ching Wang, Man-Hsin Kao, and Yung-Yu Chuang. Deep photo enhancer: Unpaired learning for image enhancement from photographs with GANs. In CVPR, 2018.
  • [8] Qi Chu, Wanli Ouyang, Hongsheng Li, Xiaogang Wang, Bin Liu, and Nenghai Yu. Online multi-object tracking using CNN-based single object tracker with spatial-temporal attention mechanism. In ICCV, 2017.
  • [9] Wikipedia contributors. Color temperature. Available from: https://en.wikipedia.org/wiki/Color_temperature
  • [10] Wikipedia contributors. sRGB. Available from: https://en.wikipedia.org/wiki/SRGB
  • [11] Kostadin Dabov, Alessandro Foi, Vladimir Katkovnik, and Karen Egiazarian. Image denoising with block-matching and 3D filtering. In Proc. SPIE, volume 6064, 2006.
  • [12] Gabriel Eilertsen, Joel Kronander, Gyorgy Denes, Rafal Mantiuk, and Jonas Unger. HDR image reconstruction from a single exposure using deep CNNs. ACM TOG, 2017.
  • [13] Michael Elad and Michal Aharon. Image denoising via sparse and redundant representations over learned dictionaries. IEEE TIP, 2006.
  • [14] Xueyang Fu, Delu Zeng, Yue Huang, Xiaoping Zhang, and Xinghao Ding. A weighted variational model for simultaneous reflectance and illumination estimation. In CVPR, 2016.
  • [15] Michael Gharbi, Jiawen Chen, Jonathan Barron, Samuel Hasinoff, and Fredo Durand. Deep bilateral learning for real-time image enhancement. In SIGGRAPH, 2017.
  • [16] A. Gijsenij, T. Gevers, and J. van de Weijer. Computational color constancy: Survey and experiments. IEEE TIP, 2011.
  • [17] M. Grossberg and S. Nayar. What is the space of camera response functions? In CVPR, 2003.
  • [18] Shuhang Gu, Lei Zhang, Wangmeng Zuo, and Xiangchu Feng. Weighted nuclear norm minimization with application to image denoising. In CVPR, 2014.
  • [19] Shi Guo, Zifei Yan, Kai Zhang, Wangmeng Zuo, and Lei Zhang. Toward convolutional blind denoising of real photographs. In CVPR, 2019.
  • [20] Xiaojie Guo, Yu Li, and Haibin Ling. LIME: Low-light image enhancement via illumination map estimation. IEEE TIP, 2017.
  • [21] Kaiming He, Jian Sun, and Xiaoou Tang. Guided image filtering. IEEE TPAMI, 2013.
  • [22] Yuanming Hu, Hao He, Chenxi Xu, Baoyuan Wang, and Stephen Lin. Exposure: A white-box photo post-processing framework. In SIGGRAPH, 2018.
  • [23] Sung Ju Hwang, Ashish Kapoor, and Sing Bing Kang. Context-based automatic local image enhancement. In ECCV, 2012.
  • [24] Andrey Ignatov, Nikolay Kobyshev, Radu Timofte, Kenneth Vanhoey, and Luc Van Gool. DSLR-quality photos on mobile devices with deep convolutional networks. In ICCV, 2017.
  • [25] Liad Kaufman, Dani Lischinski, and Michael Werman. Content-aware automatic photo enhancement. Computer Graphics Forum, 2012.
  • [26] Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv:1412.6980, 2014.
  • [27] Idan Kligvasser, Tamar Rott Shaham, and Tomer Michaeli. xUnit: Learning a spatial activation function for efficient image restoration. In CVPR, 2018.
  • [28] Alexander Krull, Tim-Oliver Buchholz, and Florian Jug. Noise2Void - learning denoising from single noisy images. In CVPR, 2019.
  • [29] Ann Lee, Kim Pedersen, and David Mumford. The complex statistics of high-contrast patches in natural images. SCTV, 2001.
  • [30] Jianwei Li, Xiaowu Chen, Dongqing Zou, Bo Gao, and Wei Teng. Conformal and low-rank sparse representation for image restoration. In ICCV, 2015.
  • [31] Tsung-Yi Lin, Piotr Dollar, Ross Girshick, Kaiming He, Bharath Hariharan, and Serge Belongie. Feature pyramid networks for object detection. In CVPR, 2017.
  • [32] Ding Liu, Bihan Wen, Yuchen Fan, Chen Change Loy, and Thomas Huang. Non-local recurrent network for image restoration. In NeurIPS, 2018.
  • [33] Seonghyeon Nam, Youngbae Hwang, Yasuyuki Matsushita, and Seon Joo Kim. A holistic approach to cross-channel image noise modeling and its application to image denoising. In CVPR, 2016.
  • [34] Jongchan Park, Joon-Young Lee, Donggeun Yoo, and In So Kweon. Distort-and-recover: Color enhancement using deep reinforcement learning. In CVPR, 2018.
  • [35] Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. Automatic differentiation in PyTorch. In NeurIPS Workshop, 2017.
  • [36] Stephen Pizer, E. Philip Amburn, John Austin, Robert Cromartie, Ari Geselowitz, Trey Greer, Bart Ter Haar Romeny, and John Zimmerman. Adaptive histogram equalization and its variations. Computer Vision, Graphics, and Image Processing, 1987.
  • [37] Tobias Plotz and Stefan Roth. Neural nearest neighbors networks. In NeurIPS, 2018.
  • [38] Adin Ramirez Rivera, Byungyong Ryu, and Oksam Chae. Content-aware dark image enhancement through channel division. IEEE TIP, 2012.
  • [39] Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-Net: Convolutional networks for biomedical image segmentation. In MICCAI, 2015.
  • [40] Ruixing Wang, Qing Zhang, Chi-Wing Fu, Xiaoyong Shen, Wei-Shi Zheng, and Jiaya Jia. Underexposed photo enhancement using deep illumination estimation. In CVPR, 2019.
  • [41] Jian Sun, Nan-Ning Zheng, Hai Tao, and Heung-Yeung Shum. Image hallucination with primal sketch priors. In CVPR, 2003.
  • [42] Xiaolong Wang, Ross Girshick, Abhinav Gupta, and Kaiming He. Non-local neural networks. In CVPR, 2018.
  • [43] Jun Xu, Lei Zhang, David Zhang, and Xiangchu Feng. Multi-channel weighted nuclear norm minimization for real color image denoising. In ICCV, 2017.
  • [44] Xiangyu Xu, Yongrui Ma, and Wenxiu Sun. Towards real scene super-resolution with raw images. In CVPR, 2019.
  • [45] Xin Yang, Ke Xu, Shaozhe Chen, Shengfeng He, Baocai Yin, and Rynson Lau. Active matting. In NeurIPS, 2018.
  • [46] Xin Yang, Ke Xu, Yibing Song, Qiang Zhang, Xiaopeng Wei, and Rynson Lau. Image correction via deep reciprocating HDR transformation. In CVPR, 2018.
  • [47] Zhenqiang Ying, Ge Li, Yurui Ren, Ronggang Wang, and Wenmin Wang. A new low-light image enhancement algorithm using camera response model. In ICCV Workshops, 2017.
  • [48] Runsheng Yu, Wenyu Liu, Yasen Zhang, Zhi Qu, Deli Zhao, and Bo Zhang. DeepExposure: Learning to expose photos with asynchronously reinforced adversarial learning. In NeurIPS, 2018.
  • [49] Kai Zhang, Wangmeng Zuo, Yunjin Chen, Deyu Meng, and Lei Zhang. Beyond a Gaussian denoiser: Residual learning of deep CNN for image denoising. IEEE TIP, 2017.
  • [50] Kai Zhang, Wangmeng Zuo, Shuhang Gu, and Lei Zhang. Learning deep CNN denoiser prior for image restoration. In CVPR, 2017.
  • [51] Qing Zhang, Ganzhao Yuan, Chunxia Xiao, Lei Zhu, and Wei-Shi Zheng. High-quality exposure correction of underexposed photos. In ACM MM, 2018.