Underwater Image Enhancement with a Deep Residual Framework

IEEE Access (2019)

Cited by 70 | Views 24
Abstract
Owing to the refraction, absorption, and scattering of light by suspended particles in water, raw underwater images suffer from low contrast, blurred details, and color distortion. These characteristics can significantly interfere with visual tasks such as segmentation and tracking. This paper proposes an underwater image enhancement solution based on a deep residual framework. First, a cycle-consistent adversarial network (CycleGAN) is employed to generate synthetic underwater images as training data for convolutional neural network models. Second, the very-deep super-resolution reconstruction model (VDSR) is introduced to underwater applications; building on it, the Underwater Resnet model is proposed, a residual learning model for underwater image enhancement tasks. Furthermore, the loss function and training mode are improved: a multi-term loss function is formed from a mean squared error loss and a proposed edge difference loss, and an asynchronous training mode is proposed to improve the performance of the multi-term loss function. Finally, the impact of batch normalization is discussed. Underwater image enhancement experiments and a comparative analysis show that the color correction and detail enhancement performance of the proposed methods is superior to that of previous deep learning models and traditional methods.
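The multi-term loss described in the abstract combines a mean squared error term with an edge difference term. A rough sketch of such a loss is given below; the Sobel operator used to extract edges and the weighting factor `alpha` are assumptions for illustration, since the abstract does not specify the exact edge operator or term weighting used in the paper.

```python
import numpy as np

def sobel_edges(img):
    # Gradient magnitude via 3x3 Sobel kernels (assumed edge operator,
    # not necessarily the one used in the paper).
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    pad = np.pad(img, 1, mode="edge")
    h, w = img.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            patch = pad[i:i + 3, j:j + 3]
            gx[i, j] = (patch * kx).sum()
            gy[i, j] = (patch * ky).sum()
    return np.hypot(gx, gy)

def multi_term_loss(pred, target, alpha=0.5):
    # MSE on pixel intensities plus MSE on edge maps;
    # alpha is a hypothetical weighting between the two terms.
    mse = np.mean((pred - target) ** 2)
    edge = np.mean((sobel_edges(pred) - sobel_edges(target)) ** 2)
    return mse + alpha * edge
```

In a training framework the same two terms would be expressed with differentiable tensor operations; this NumPy version only illustrates how the pixel-wise and edge-wise errors combine into one scalar loss.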
Keywords
Asynchronous training, edge difference loss, residual learning, underwater image enhancement