Light the Night: A Multi-Condition Diffusion Framework for Unpaired Low-Light Enhancement in Autonomous Driving
arXiv (2024)
Abstract
Vision-centric perception systems for autonomous driving have gained
considerable attention recently due to their cost-effectiveness and
scalability, especially compared to LiDAR-based systems. However, these systems
often struggle in low-light conditions, potentially compromising their
performance and safety. To address this, our paper introduces LightDiff, a
domain-tailored framework designed to enhance the low-light image quality for
autonomous driving applications. Specifically, we employ a multi-condition
controlled diffusion model. LightDiff works without any human-collected paired
data, leveraging a dynamic data degradation process instead. It incorporates a
novel multi-condition adapter that adaptively controls the input weights from
different modalities, including depth maps, RGB images, and text captions, to
effectively illuminate dark scenes while maintaining context consistency.
Furthermore, to align the enhanced images with the detection model's knowledge,
LightDiff employs perception-specific scores as rewards to guide the diffusion
training process through reinforcement learning. Extensive experiments on the
nuScenes dataset demonstrate that LightDiff can significantly improve the
performance of several state-of-the-art 3D detectors in night-time conditions
while achieving high visual quality scores, highlighting its potential to
safeguard autonomous driving.
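The adaptive per-modality weighting described above can be illustrated with a minimal sketch. This is not the authors' released implementation: the module name MultiConditionAdapter, the pooled feature dimensions, and the softmax gating design are all assumptions made for illustration, showing only the general idea of learning input weights over depth, RGB, and text conditions before fusing them into one conditioning signal.

```python
# Minimal sketch (assumed, not the paper's code) of adaptive multi-condition
# weighting: three condition streams (depth map, low-light RGB, text caption)
# are projected to a shared width, a learned gate assigns each stream a
# softmax weight, and the streams are fused into one conditioning vector
# for the diffusion backbone. All names and dimensions are illustrative.
import torch
import torch.nn as nn


class MultiConditionAdapter(nn.Module):
    def __init__(self, depth_dim: int, rgb_dim: int, text_dim: int,
                 cond_dim: int = 256):
        super().__init__()
        # Project each modality into a shared conditioning space.
        self.depth_proj = nn.Linear(depth_dim, cond_dim)
        self.rgb_proj = nn.Linear(rgb_dim, cond_dim)
        self.text_proj = nn.Linear(text_dim, cond_dim)
        # Gate network: sees all projected conditions and outputs one
        # weight per modality (the adaptive input-weight control).
        self.gate = nn.Sequential(
            nn.Linear(cond_dim * 3, cond_dim),
            nn.ReLU(),
            nn.Linear(cond_dim, 3),
        )

    def forward(self, depth_feat, rgb_feat, text_feat):
        # Each input: (batch, modality_dim) pooled feature.
        conds = torch.stack(
            [self.depth_proj(depth_feat),
             self.rgb_proj(rgb_feat),
             self.text_proj(text_feat)],
            dim=1,
        )  # (batch, 3, cond_dim)
        weights = torch.softmax(
            self.gate(conds.flatten(start_dim=1)), dim=-1
        )  # (batch, 3), one adaptive weight per modality
        # Weighted fusion of the three condition streams.
        return (weights.unsqueeze(-1) * conds).sum(dim=1)  # (batch, cond_dim)


# Usage sketch: fuse pooled depth/RGB/text features into one condition vector.
adapter = MultiConditionAdapter(depth_dim=64, rgb_dim=512, text_dim=768)
cond = adapter(torch.randn(2, 64), torch.randn(2, 512), torch.randn(2, 768))
print(cond.shape)  # torch.Size([2, 256])
```

The fused vector would then condition the diffusion model's denoising steps; the paper's reward-guided training would additionally weight the training signal by perception-specific detection scores, a component this sketch does not cover.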