Color Constancy Algorithm Using Ambient Light Sensor

Li Yuemin, Xu Haisong, Huang Yiming, Yang Minhang, Hu Bing, Zhang Yuntao

ACTA OPTICA SINICA (2023)

Abstract
Objective Color constancy is a fundamental characteristic of human vision, referring to the ability to correct color deviations caused by differences in illumination. Digital cameras, however, cannot automatically remove the color cast of the illumination; the color bias is instead adjusted by correcting the image with an illuminant estimate, generally produced by color constancy algorithms. As an essential part of image signal processing, color constancy algorithms are critical for improving image quality and the accuracy of computer vision tasks. Substantial efforts have been made to develop illuminant estimation methods, resulting in a proliferation of statistics- and learning-based algorithms. Existing color constancy algorithms usually provide accurate and stable illuminant estimation for conventional scenes. However, unacceptable errors often arise in scenes of low color complexity, with monotonous content and large uniformly colored surfaces, because such scenes offer few hints about the illuminant color. To address this problem, this study proposes a color constancy algorithm based on an ambient light sensor (ALS) to improve the accuracy of illuminant estimation in scenes with low color complexity. The approach leverages the fact that most intelligent terminals are equipped with an ALS, and it enhances illuminant estimation accuracy by using the ALS measurement alongside the image content.

Methods The proposed color constancy algorithm comprises two steps. The first step evaluates the reliability of the ALS measurement with a confidence assessment model; the second performs illuminant estimation with the method appropriate to that confidence level. The reliability of the ALS is affected by the relative position of the ALS and the light source. Therefore, a bagging tree classifier is trained as the confidence assessment model, with the posture of the camera, the color complexity of the image, and the Duv (distance from the blackbody locus) of the estimated illuminant chromaticity as input parameters. Two illuminant estimation methods are designed for the different confidence levels. When the confidence of the ALS measurement is high, the illuminant is estimated by a color space transformation from the ALS response to camera RGB via a second-order root-polynomial model, which is trained by minimizing the mean angular error over the training samples. If the ALS measurement has low confidence and the base algorithm has high confidence, the illuminant is estimated by extracting neutral pixels with a mask determined by the ALS measurement and by illuminant distribution characteristics, building on existing neutral color extraction methods (Fig. 2). Finally, if both the ALS measurement and the base algorithm have low confidence, the illuminant color is obtained by averaging the results of the two methods above. To evaluate the proposed ALS-based color constancy algorithm (ALS-based CC), a dataset was collected with a Nikon D3X camera fitted with a TCS3440 ALS. The dataset includes both conventional and low color complexity scenes captured indoors and outdoors (Fig. 5), illuminated by light sources spanning a wide range of chromaticity (Fig. 4). In each image of the dataset, a classic color checker was placed to provide the ground-truth label and was masked out during the evaluation.
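As an illustration of the confidence-gated estimation flow described above, the following minimal Python sketch combines a second-order root-polynomial mapping with the low-confidence fallbacks. The function names (root_poly_expand, estimate_illuminant, neutral_pixel_estimate), the 3x6 mapping matrix M, and the reduction of the ALS reading to three channels are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def root_poly_expand(als_rgb):
    """Second-order root-polynomial terms of a 3-channel ALS reading.

    Terms: r, g, b, sqrt(rg), sqrt(rb), sqrt(gb). (The TCS3440 reports
    more channels; three are used here only to keep the sketch short.)
    """
    r, g, b = als_rgb
    return np.array([r, g, b,
                     np.sqrt(r * g), np.sqrt(r * b), np.sqrt(g * b)])

def estimate_illuminant(als_reading, image, M,
                        als_confident, base_confident,
                        neutral_pixel_estimate):
    """Confidence-gated illuminant estimation (illustrative sketch).

    - ALS confident: map the ALS response to a camera-RGB illuminant
      estimate with the learned 3x6 root-polynomial matrix M.
    - ALS not confident, base algorithm confident: fall back to a
      neutral-pixel estimate guided by the ALS measurement.
    - Neither confident: average the two normalized estimates.
    """
    als_estimate = M @ root_poly_expand(als_reading)
    if als_confident:
        return als_estimate
    pixel_estimate = neutral_pixel_estimate(image, als_reading)
    if base_confident:
        return pixel_estimate
    unit = lambda v: v / np.linalg.norm(v)
    return 0.5 * (unit(als_estimate) + unit(pixel_estimate))
```

In practice the two boolean flags would come from the bagging tree confidence model and the base algorithm's own confidence measure, and neutral_pixel_estimate would wrap the ALS-guided neutral pixel extraction built on the chosen base algorithm.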
Results and Discussions The confidence assessment model of the ALS is trained and tested on 50 and 20 samples, respectively, collected with the setup described above. The model correctly identifies all of the low confidence testing samples but misjudges some of the high confidence ones (Table 2). ALS-based CC, whose parameters were determined from angular error statistics, is executed with Grey Pixels (GP) as the base algorithm for neutral pixel extraction. Its performance is compared with that of statistics-based counterparts on the established dataset. The results show that the proposed algorithm outperforms its counterparts in the mean, tri-mean, and median of angular errors over the testing samples, indicating high overall accuracy. Moreover, ALS-based CC keeps the mean of the worst 25% of angular errors below 5°, demonstrating excellent stability even in challenging scenes (Table 3). In the visualization of typical scenes, ALS-based CC estimates the illuminant accurately most of the time, producing processed images that are largely consistent with the ground truth, whereas all the counterparts perform poorly on some scenes with large pure-color surfaces, degrading their corrected images with significant color bias (Fig. 6). Furthermore, the running time of ALS-based CC is reduced to 66% of that of GP on MATLAB 2021b, suggesting its potential for real-time illuminant estimation applications.

Conclusions This study proposes a color constancy algorithm that integrates an ALS with the camera to improve illuminant estimation accuracy in scenes with low color complexity. The algorithm consists of a confidence assessment model for the ALS and two illuminant estimation methods, based on color space transformation and neutral pixel extraction, designed for different confidence levels. Furthermore, a dataset with ALS measurements was established to evaluate the algorithm; the results show that the mean, median, and mean of the worst 25% of angular errors of the proposed method decrease by 32%, 21%, and 41%, respectively, compared with the most accurate existing method. The proposed algorithm also shows potential for real-time illuminant estimation in both conventional and low color complexity scenes.
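For reference, the angular error and tri-mean statistics cited above follow the standard definitions used in color constancy evaluation, which are assumed here to match the paper's usage: the angular error is the angle between the estimated and ground-truth illuminant vectors in camera RGB space, and the tri-mean is Tukey's weighted average of the quartiles.

\[
\varepsilon_{\mathrm{ang}} = \cos^{-1}\!\left(\frac{\boldsymbol{\rho}_{\mathrm{est}}\cdot\boldsymbol{\rho}_{\mathrm{gt}}}{\lVert\boldsymbol{\rho}_{\mathrm{est}}\rVert\,\lVert\boldsymbol{\rho}_{\mathrm{gt}}\rVert}\right),
\qquad
\mathrm{Trimean} = \frac{Q_1 + 2\,Q_2 + Q_3}{4}
\]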
Keywords
vision optics, color constancy, illuminant color estimation, ambient light sensor, neutral pixel extraction, scene with low color complexity