From Fidelity to Perceptual Quality: A Semi-Supervised Approach for Low-Light Image Enhancement

CVPR, pp. 3060-3069, 2020.


Abstract:

Under-exposure introduces a series of visual degradations, e.g. decreased visibility, intensive noise, and biased color. To address these problems, we propose a novel semi-supervised learning approach for low-light image enhancement. A deep recursive band network (DRBN) is proposed to recover a linear band representation of an enhance…

Introduction
  • In low-light conditions, a series of visual degradations, i.e. low visibility, low contrast, and intensive noise, appear in captured images.

    [Figure: (a) Input, (b) SICE [2], (c) EnlightenGAN [11], (d) DRBN]
  • Insufficient light reaches the camera sensors, so the scene signal is buried in system noise.
  • A longer exposure time would help suppress noise, but it introduces blurriness.
  • Low-light enhancement methods at the software end are therefore desired.
  • Low-light enhancement aims to restore an image captured under low light to a normal-light one, where visibility is improved, contrast is stretched, and noise is suppressed.
  • The enhancement process improves visual quality and offers a good starting point for high-level computer vision tasks.
Highlights
  • In low-light conditions, a series of visual degradations, i.e. low visibility, low contrast, and intensive noise, appear in captured images.

    [Figure: (a) Input, (b) Single Image Contrast Enhancer (SICE) [2], (c) EnlightenGAN [11], (d) DRBN]
  • We perform quantitative evaluations to compare the performance of different methods
  • We aim to create a novel semi-supervised learning method utilizing the knowledge of synthetic paired low/normal-light images and unpaired high-quality data for low-light image enhancement
  • We create a two-stage network which restores the signal based on fidelity first and further enhances the results to improve overall visual quality
  • Both qualitative and quantitative evaluations demonstrate the advantages of the proposed method
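The fidelity-first, perceptual-second design of the two-stage network can be illustrated with a toy 1-D pipeline. This is an illustrative sketch only, not the authors' DRBN: the mean filter and gamma curve below are stand-in assumptions for the learned restoration and quality-enhancement stages.

```python
# Toy two-stage enhancement sketch (illustrative, NOT the authors' DRBN):
# stage 1 restores fidelity (a 3-tap mean filter suppressing noise),
# stage 2 improves perceptual quality (a gamma curve lifting shadows).

def stage1_fidelity(signal):
    """Suppress noise with a simple 3-tap mean filter (edge samples kept)."""
    out = list(signal)
    for i in range(1, len(signal) - 1):
        out[i] = (signal[i - 1] + signal[i] + signal[i + 1]) / 3.0
    return out

def stage2_perceptual(signal, gamma=0.5):
    """Brighten the restored signal with a gamma curve (values in [0, 1])."""
    return [v ** gamma for v in signal]

def enhance(signal):
    return stage2_perceptual(stage1_fidelity(signal))

low_light = [0.04, 0.06, 0.05, 0.04, 0.06]  # dark, slightly noisy scanline
print(enhance(low_light))
```

The ordering matters: denoising before the brightness stretch avoids amplifying noise, which is the motivation the paper gives for restoring fidelity first.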
Methods
  • The LOL dataset of real captured low/normal-light image pairs [30] is used for objective and subjective evaluations, since it contains highly degraded real images on which most methods cannot achieve promising results.
  • The compared methods include Bio-Inspired Multi-Exposure Fusion (BIMEF) [31], Brightness Preserving Dynamic Histogram Equalization (BPDHE) [10], Camera Response Model (CRM) [33], Differential value Histogram Equalization Contrast Enhancement (DHECE) [22], Dong [6], Exposure Fusion Framework (EFF) [32], Contrast Limited Adaptive Histogram Equalization (CLAHE) [36], Low-Light Image Enhancement via Illumination Map Estimation (LIME) [9], Multiple Fusion (MF) [7], Multiscale Retinex (MR) [13], Joint Enhancement and Denoising Method (JED) [25], Refined Retinex Model (RRM) [18], Simultaneous Reflectance and Illumination Estimation (SRIE) [8], Deep Retinex Decomposition (DRD) [30], Deep Underexposed Photo Enhancement (DeepUPE) [27], Single Image Contrast Enhancer (SICE) [2], and EnlightenGAN [11].
Results
  • The authors perform quantitative evaluations to compare the performance of different methods.
  • The compared baselines include BIMEF [31], BPDHE [10], CRM [33], DHECE [22], Dong [6], EFF [32], CLAHE [36], LIME [9], JED [25], RRM [18], and SRIE [8]; the full per-method scores are listed in Table 1.
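The quantitative comparison relies on fidelity metrics such as PSNR. As a reference point, a minimal implementation of the standard PSNR formula (assuming intensities normalized to [0, 1]) looks like this:

```python
import math

def psnr(reference, test, peak=1.0):
    """Peak signal-to-noise ratio in dB between two equal-length pixel lists."""
    mse = sum((r - t) ** 2 for r, t in zip(reference, test)) / len(reference)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(peak ** 2 / mse)

# A uniform error of 0.1 on [0, 1] images gives MSE = 0.01, i.e. about 20 dB.
print(psnr([0.5, 0.5, 0.5, 0.5], [0.6, 0.4, 0.6, 0.4]))
```

Higher is better: each 10 dB corresponds to a tenfold reduction in mean squared error, which is why gaps of a few dB between methods in Table 1 are meaningful.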
Conclusion
  • The authors aim to create a novel semi-supervised learning method utilizing the knowledge of synthetic paired low/normal-light images and unpaired high-quality data for low-light image enhancement.
  • To this end, the authors create a two-stage network which restores the signal based on fidelity first and further enhances the results to improve overall visual quality.
  • Both qualitative and quantitative evaluations demonstrate the advantages of the proposed method
Tables
  • Table 1: Quantitative results on real test images in the LOL-Real dataset. EG denotes EnlightenGAN.
Related work
  • The earliest low-light enhancement methods adjust the illumination uniformly, which easily causes overexposure and under-exposure, such as Histogram equalization (HE) [23, 1]. Without local adaptation, the enhancement leads to undesirable illumination and intensive noise. Some methods [17, 35] enhance the visibility by applying dehazing methods to the inverted low-light images. In these methods, the off-line denoising operation [5] is applied to suppress noise, which sometimes also leads to detail blurriness.
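The uniform adjustment performed by histogram equalization can be sketched in a few lines. This is a generic textbook implementation over a flat list of 8-bit pixels, not the exact code of any method cited above:

```python
def histogram_equalize(pixels, levels=256):
    """Classic global histogram equalization for a flat list of 8-bit pixels."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    # Cumulative distribution function of pixel intensities.
    cdf, total = [], 0
    for count in hist:
        total += count
        cdf.append(total)
    cdf_min = next(c for c in cdf if c > 0)
    n = len(pixels)
    # Standard mapping: stretch the CDF to span the full intensity range.
    lut = [round((c - cdf_min) / max(n - cdf_min, 1) * (levels - 1)) for c in cdf]
    return [lut[p] for p in pixels]

dark = [10, 10, 12, 12, 14, 14, 16, 16]  # intensities crowded near black
print(histogram_equalize(dark))          # spread across the full 0-255 range
```

Because the mapping is a single global curve, it cannot adapt to local content, which is exactly the over-/under-exposure weakness noted above.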

    Later on, Retinex-based methods [15] perform joint illumination adjustment and noise suppression by decomposing the image into illumination and reflectance layers and adjusting them adaptively. Various priors, e.g. a structure-aware prior [9], weighted variation [8], and multiple derivatives of illumination [7], are utilized to guide the manipulation of these two layers. Variants of Retinex models, e.g. single-scale Retinex [14], multi-scale Retinex [12], naturalness Retinex [28], and robust Retinex [18, 25], are developed to facilitate low-light image enhancement.
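The core Retinex decomposition — estimate a smooth illumination layer and treat the log-domain residual as reflectance — reduces to a short sketch. This is a 1-D toy with a moving-average surround standing in for the usual Gaussian; the function name, radius, and epsilon are illustrative choices:

```python
import math

def single_scale_retinex(signal, radius=1, eps=1e-6):
    """1-D Retinex toy: log(signal) - log(smoothed illumination estimate)."""
    n = len(signal)
    illum = []
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        window = signal[lo:hi]
        illum.append(sum(window) / len(window))  # moving-average "surround"
    # Reflectance = log(S) - log(L); global illumination scaling cancels out.
    return [math.log(s + eps) - math.log(l + eps) for s, l in zip(signal, illum)]

# The same edge pattern under dim (x0.1) and bright lighting yields
# near-identical reflectance estimates:
bright = [0.2, 0.2, 0.8, 0.8]
dim = [v * 0.1 for v in bright]
print(single_scale_retinex(bright))
print(single_scale_retinex(dim))
```

The illumination-invariance shown here is what makes the reflectance layer a useful target for separate, adaptive adjustment in the cited Retinex variants.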
Funding
  • This work is partially supported by the Hong Kong ITF UICP under Grant 9440203, in part by the National Natural Science Foundation of China under Contract No. 61772043, in part by the Beijing Natural Science Foundation under Contract No. L182002, and in part by the National Key R&D Program of China under Grant No. 2018AAA0102700.
References
  • M. Abdullah-Al-Wadud, M. H. Kabir, M. A. Akber Dewan, and O. Chae. A dynamic histogram equalization for image contrast enhancement. IEEE Transactions on Consumer Electronics, 53(2):593–600, May 2007.
  • J. Cai, S. Gu, and L. Zhang. Learning a deep single image contrast enhancer from multi-exposure images. IEEE Trans. on Image Processing, 27(4):2049–2062, April 2018.
  • Jianrui Cai, Shuhang Gu, and Lei Zhang. Learning a deep single image contrast enhancer from multi-exposure images. IEEE Trans. on Image Processing, 27(4):2049–2062, April 2018.
  • Chen Chen, Qifeng Chen, Jia Xu, and Vladlen Koltun. Learning to see in the dark. In Proc. IEEE Int'l Conf. Computer Vision and Pattern Recognition, 2018.
  • K. Dabov, A. Foi, V. Katkovnik, and K. Egiazarian. Image denoising by sparse 3-D transform-domain collaborative filtering. IEEE Trans. on Image Processing, 16(8):2080–2095, Aug 2007.
  • Xuan Dong, Guan Wang, Yi Pang, Weixin Li, Jiangtao Wen, Wei Meng, and Yao Lu. Fast efficient algorithm for enhancement of low lighting video. In Proc. IEEE Int'l Conf. Multimedia and Expo, pages 1–6, 2011.
  • Xueyang Fu, Delu Zeng, Yue Huang, Yinghao Liao, Xinghao Ding, and John Paisley. A fusion-based enhancing method for weakly illuminated images. Signal Processing, 129:82–96, 2016.
  • X. Fu, D. Zeng, Y. Huang, X. P. Zhang, and X. Ding. A weighted variational model for simultaneous reflectance and illumination estimation. In Proc. IEEE Int'l Conf. Computer Vision and Pattern Recognition, pages 2782–2790, June 2016.
  • X. Guo, Y. Li, and H. Ling. LIME: Low-light image enhancement via illumination map estimation. IEEE Trans. on Image Processing, 26(2):982–993, Feb 2017.
  • Haidi Ibrahim and Nicholas Sia Pik Kong. Brightness preserving dynamic histogram equalization for image contrast enhancement. IEEE Transactions on Consumer Electronics, 53(4):1752–1758, 2007.
  • Yifan Jiang, Xinyu Gong, Ding Liu, Yu Cheng, Chen Fang, Xiaohui Shen, Jianchao Yang, Pan Zhou, and Zhangyang Wang. EnlightenGAN: Deep light enhancement without paired supervision. arXiv preprint arXiv:1906.06972, 2019.
  • D. J. Jobson, Z. Rahman, and G. A. Woodell. A multiscale retinex for bridging the gap between color images and the human observation of scenes. IEEE Trans. on Image Processing, 6(7):965–976, July 1997.
  • D. J. Jobson, Z. Rahman, and G. A. Woodell. A multiscale retinex for bridging the gap between color images and the human observation of scenes. IEEE Trans. on Image Processing, 6(7):965–976, July 1997.
  • D. J. Jobson, Z. Rahman, and G. A. Woodell. Properties and performance of a center/surround retinex. IEEE Trans. on Image Processing, 6(3):451–462, Mar 1997.
  • Edwin H. Land. The retinex theory of color vision. Scientific American, pages 108–128, 1977.
  • C. Lee, C. Lee, and C. S. Kim. Contrast enhancement based on layered difference representation of 2D histograms. IEEE Trans. on Image Processing, 22(12):5372–5384, Dec 2013.
  • L. Li, R. Wang, W. Wang, and W. Gao. A low-light image enhancement method for both denoising and contrast enlarging. In Proc. IEEE Int'l Conf. Image Processing, pages 3730–3734, Sept 2015.
  • M. Li, J. Liu, W. Yang, X. Sun, and Z. Guo. Structure-revealing low-light image enhancement via robust retinex model. IEEE Trans. on Image Processing, 27(6):2828–2841, June 2018.
  • Yuen Peng Loh and Chee Seng Chan. Getting to know low-light images with the exclusively dark dataset. Computer Vision and Image Understanding, 178:30–42, 2019.
  • Kin Gwn Lore, Adedotun Akintayo, and Soumik Sarkar. LLNet: A deep autoencoder approach to natural low-light image enhancement. Pattern Recognition, 61:650–662, 2017.
  • N. Murray, L. Marchesotti, and F. Perronnin. AVA: A large-scale database for aesthetic visual analysis. In Proc. IEEE Int'l Conf. Computer Vision and Pattern Recognition, pages 2408–2415, June 2012.
  • Keita Nakai, Yoshikatsu Hoshi, and Akira Taguchi. Color image contrast enhancement method based on differential intensity/saturation gray-levels histograms. In International Symposium on Intelligent Signal Processing and Communications Systems, pages 445–449, 2013.
  • S. M. Pizer, R. E. Johnston, J. P. Ericksen, B. C. Yankaskas, and K. E. Muller. Contrast-limited adaptive histogram equalization: speed and effectiveness. In Proceedings of Conference on Visualization in Biomedical Computing, pages 337–345, May 1990.
  • W. Ren, S. Liu, L. Ma, Q. Xu, X. Xu, X. Cao, J. Du, and M. Yang. Low-light image enhancement via a deep hybrid network. IEEE Trans. on Image Processing, 28(9):4364–4375, Sep. 2019.
  • X. Ren, M. Li, W. Cheng, and J. Liu. Joint enhancement and denoising method via sequential decomposition. In IEEE Int'l Symposium on Circuits and Systems, pages 1–5, May 2018.
  • L. Shen, Z. Yue, F. Feng, Q. Chen, S. Liu, and J. Ma. MSR-net: Low-light image enhancement using deep convolutional network. arXiv e-prints, November 2017.
  • Ruixing Wang, Qing Zhang, Chi-Wing Fu, Xiaoyong Shen, Wei-Shi Zheng, and Jiaya Jia. Underexposed photo enhancement using deep illumination estimation. In Proc. IEEE Int'l Conf. Computer Vision and Pattern Recognition, June 2019.
  • S. Wang, J. Zheng, H. M. Hu, and B. Li. Naturalness preserved enhancement algorithm for non-uniform illumination images. IEEE Trans. on Image Processing, 22(9):3538–3548, Sept 2013.
  • Zhou Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli. Image quality assessment: from error visibility to structural similarity. IEEE Trans. on Image Processing, 13(4):600–612, April 2004.
  • Chen Wei, Wenjing Wang, Wenhan Yang, and Jiaying Liu. Deep retinex decomposition for low-light enhancement. In British Machine Vision Conference, Sept 2018.
  • Z. Ying, G. Li, and W. Gao. A bio-inspired multi-exposure fusion framework for low-light image enhancement. arXiv e-prints, November 2017.
  • Zhenqiang Ying, Ge Li, Yurui Ren, Ronggang Wang, and Wenmin Wang. A new image contrast enhancement algorithm using exposure fusion framework. In International Conference on Computer Analysis of Images and Patterns, pages 36–46. Springer, 2017.
  • Zhenqiang Ying, Ge Li, Yurui Ren, Ronggang Wang, and Wenmin Wang. A new low-light image enhancement algorithm using camera response model. In Proc. IEEE Int'l Conf. Computer Vision, Oct 2017.
  • Ye Yuan, Wenhan Yang, Wenqi Ren, Jiaying Liu, Walter J. Scheirer, and Zhangyang Wang. UG2+ Track 2: A collective benchmark effort for evaluating and advancing image understanding in poor visibility environments. arXiv e-prints, arXiv:1904.04474, Apr 2019.
  • X. Zhang, P. Shen, L. Luo, L. Zhang, and J. Song. Enhancement and noise reduction of very low light level images. In Proc. IEEE Int'l Conf. Pattern Recognition, pages 2034–2037, Nov 2012.
  • Karel Zuiderveld. Contrast limited adaptive histogram equalization. In Graphics Gems IV, pages 474–485. Academic Press Professional, Inc., San Diego, CA, USA, 1994.