Learning Long-Term Invariant Features For Vision-Based Localization

Niluthpol C. Mithun, Cody Simons, Robert Casey, Stefan Hilligardt, Amit Roy-Chowdhury

2018 IEEE Winter Conference on Applications of Computer Vision (WACV 2018)

Abstract
Constructing a feature representation invariant to certain types of geometric and photometric transformations is of significant importance in many computer vision applications. In spite of significant effort, developing invariant feature representations remains a challenging problem. Most existing representations fail to satisfy the long-term repeatability requirements of applications such as vision-based localization, whose operating domain includes significant, non-uniform illumination and environmental changes. To this end, we explore the use of natural image pairs (i.e., images captured at the same location but at different times) as an additional source of supervision to generate an improved feature representation for the task of vision-based localization. Specifically, we train a deep denoising autoencoder in which the CNN feature representation of one image in the pair is treated as a noisy version of the other. The resulting system thereby learns localization features that are both discriminative and invariant to illumination and environmental changes. In experiments tailored towards vision-based localization, features generated using the proposed method produced higher matching rates than state-of-the-art image features.
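The abstract's core idea (reconstructing one image's CNN descriptor from its paired image's descriptor with a denoising autoencoder) can be sketched as follows. This is a minimal illustration under assumed choices: a PyTorch setup, precomputed fixed-length CNN descriptors for each natural image pair, and hypothetical names (FeatureDAE, train_step) and layer sizes that are not taken from the paper.

```python
# Minimal sketch (assumptions: PyTorch, precomputed 4096-d CNN descriptors per image).
# A denoising autoencoder over CNN features, where the descriptor of one image in a
# natural pair is treated as a "noisy" version of the other. Names are illustrative.
import torch
import torch.nn as nn

class FeatureDAE(nn.Module):
    """Denoising autoencoder over fixed-length CNN descriptors (hypothetical sizes)."""
    def __init__(self, feat_dim=4096, code_dim=512):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(feat_dim, 1024), nn.ReLU(),
            nn.Linear(1024, code_dim), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.Linear(code_dim, 1024), nn.ReLU(),
            nn.Linear(1024, feat_dim),
        )

    def forward(self, x):
        code = self.encoder(x)           # the learned localization feature
        return self.decoder(code), code

def train_step(model, optimizer, feat_a, feat_b):
    """One update: reconstruct image B's descriptor from image A's, and vice versa."""
    optimizer.zero_grad()
    recon_ab, _ = model(feat_a)          # feat_a treated as a noisy version of feat_b
    recon_ba, _ = model(feat_b)          # feat_b treated as a noisy version of feat_a
    loss = (nn.functional.mse_loss(recon_ab, feat_b)
            + nn.functional.mse_loss(recon_ba, feat_a))
    loss.backward()
    optimizer.step()
    return loss.item()

# Usage with random stand-ins for CNN descriptors of a batch of natural image pairs
model = FeatureDAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
feat_a, feat_b = torch.randn(32, 4096), torch.randn(32, 4096)
print(train_step(model, opt, feat_a, feat_b))
```

At test time, only the encoder output would be used as the image descriptor for matching; the reconstruction loss is a training-time device that pushes paired descriptors toward a common, condition-invariant representation.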
Keywords
long-term invariant features, vision-based localization, geometric transformations, photometric transformations, computer vision applications, invariant feature representations, non-uniform illumination, environmental changes, natural image pairs, improved feature representation, deep denoising autoencoder, CNN feature representation, localization features, long-term repeatability requirements