LoFTR: Detector-Free Local Feature Matching with Transformers

2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2021)

Abstract
We present a novel method for local image feature matching. Instead of performing image feature detection, description, and matching sequentially, we propose to first establish pixel-wise dense matches at a coarse level and later refine the good matches at a fine level. In contrast to dense methods that use a cost volume to search correspondences, we use self and cross attention layers in Transformer to obtain feature descriptors that are conditioned on both images. The global receptive field provided by Transformer enables our method to produce dense matches in low-texture areas, where feature detectors usually struggle to produce repeatable interest points. The experiments on indoor and outdoor datasets show that LoFTR outperforms state-of-the-art methods by a large margin. LoFTR also ranks first on two public benchmarks of visual localization among the published methods. Code is available at our project page: https://zju3dv.github.io/loftr/.
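The coarse stage described above (descriptors conditioned on both images via interleaved self- and cross-attention, then dense matching) can be sketched roughly as follows. This is a minimal illustrative sketch in NumPy, not the authors' implementation: the single-head attention, the layer count, the temperature, and the mutual-nearest-neighbour selection are simplifying assumptions, and the real model uses linear attention, positional encodings, and a differentiable matching layer.

```python
import numpy as np

def attention(q, k, v):
    # Plain scaled dot-product attention over flattened coarse features.
    scores = q @ k.T / np.sqrt(q.shape[-1])
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    w = np.exp(scores)
    w /= w.sum(axis=-1, keepdims=True)
    return w @ v

def coarse_match(feat_a, feat_b, n_layers=2, temp=0.1):
    """Toy coarse matcher: (N, d) and (M, d) coarse-grid descriptors in.

    Alternates self- and cross-attention so each descriptor sees both
    images, then keeps mutual nearest neighbours of the score matrix.
    """
    for _ in range(n_layers):
        feat_a = attention(feat_a, feat_a, feat_a)  # self-attention, image A
        feat_b = attention(feat_b, feat_b, feat_b)  # self-attention, image B
        fa = attention(feat_a, feat_b, feat_b)      # cross-attention A <- B
        fb = attention(feat_b, feat_a, feat_a)      # cross-attention B <- A
        feat_a, feat_b = fa, fb
    scores = feat_a @ feat_b.T / temp               # dense similarity matrix
    best_a = scores.argmax(axis=1)                  # A -> B nearest
    best_b = scores.argmax(axis=0)                  # B -> A nearest
    # Keep only mutually consistent (coarse) matches.
    return [(i, int(j)) for i, j in enumerate(best_a) if best_b[j] == i]
```

A fine-level stage would then crop local windows around each coarse match and regress a sub-pixel position; the sketch stops at the coarse matches.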
Keywords
detector-free local feature,transformers,local image feature,image feature detection,pixel-wise dense matches,coarse level,good matches,fine level,dense methods,cost volume,search correspondences,cross attention layers,Transformer,feature descriptors,global receptive field,low-texture areas,feature detectors usually struggle,repeatable interest points,visual localization