Seeing Through Darkness: Visual Localization at Night via Weakly Supervised Learning of Domain Invariant Features

IEEE Transactions on Multimedia (2023)

Abstract
Long-term visual localization must overcome the problem of matching images under dramatic photometric changes caused by different seasons, natural and man-made illumination changes, etc. Visual localization at night plays a vital role in many applications such as autonomous driving and augmented reality, for which extracting keypoints and descriptors that are robust to day-night illumination changes has become the bottleneck. This paper proposes an adversarial-learning-based solution that harvests the weak domain labels of day and night images, together with point-level correspondences among daytime images, to achieve robust local feature extraction and description across day-night images. The key idea is to learn a discriminator that distinguishes whether a feature map is generated from a day or a night image, while simultaneously adjusting the parameters of the feature extraction network so as to fool the discriminator. After adversarial training of the discriminator and the feature extraction network, the feature extraction network reaches a stable state in which the extracted feature maps are robust to day-night photometric changes, from which day-night domain-invariant keypoints and descriptors can be extracted. Compared to existing local feature learning methods, it only requires an additional set of easily captured night images to improve the domain invariance of the learned features. Experiments on two challenging benchmarks show the effectiveness of the proposed method. In addition, this paper revisits the widely used image matching metrics on HPatches and finds that the recall of different methods is highly correlated with their relative localization performance.
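To make the adversarial idea concrete, the following is a minimal PyTorch sketch of day/night domain-adversarial feature learning as described in the abstract. The `FeatureExtractor` and `DomainDiscriminator` architectures, the alternating-update schedule, and all hyperparameters are illustrative assumptions rather than the paper's implementation; in particular, the supervised loss on point-level correspondences among daytime images that the method combines with the adversarial term is omitted here.

```python
import torch
import torch.nn as nn

# Hypothetical minimal dense feature backbone (not the paper's architecture).
class FeatureExtractor(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)  # dense feature map, B x 128 x H x W

# Discriminator that predicts whether a feature map comes from a day or a
# night image, using only the weak, image-level domain label.
class DomainDiscriminator(nn.Module):
    def __init__(self, channels=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, 1),
        )

    def forward(self, feat):
        return self.net(feat)  # logit; convention here: "day" = 1, "night" = 0

extractor = FeatureExtractor()
discriminator = DomainDiscriminator()
opt_f = torch.optim.Adam(extractor.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

def adversarial_step(day_imgs, night_imgs):
    """One alternating update: train D to tell day from night feature maps,
    then train the extractor to fool D so its features become domain invariant."""
    # --- update discriminator on frozen features ---
    with torch.no_grad():
        f_day = extractor(day_imgs)
        f_night = extractor(night_imgs)
    d_loss = bce(discriminator(f_day), torch.ones(day_imgs.size(0), 1)) + \
             bce(discriminator(f_night), torch.zeros(night_imgs.size(0), 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # --- update feature extractor to fool the discriminator ---
    f_night = extractor(night_imgs)
    # Push night feature maps toward the "day" decision so the two domains
    # become indistinguishable in feature space.
    g_loss = bce(discriminator(f_night), torch.ones(night_imgs.size(0), 1))
    opt_f.zero_grad(); g_loss.backward(); opt_f.step()
    return d_loss.item(), g_loss.item()

# Usage with random tensors standing in for image batches.
day = torch.randn(4, 3, 128, 128)
night = torch.randn(4, 3, 128, 128)
print(adversarial_step(day, night))
```

In a full system this adversarial term would be trained jointly with the keypoint/descriptor losses supervised by the daytime correspondences, so that the features stay discriminative while becoming day-night invariant.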
Keywords
Domain invariant local features, image matching, long-term visual localization, weakly supervised learning