Leveraging Image Matching Toward End-to-End Relative Camera Pose Regression
arXiv (2022)
Abstract
This paper proposes a generalizable, end-to-end deep learning-based method
for relative pose regression between two images. Given two images of the same
scene captured from different viewpoints, our method predicts the relative
rotation and translation (including direction and scale) between the two
respective cameras. Inspired by the classical pipeline, our method leverages
Image Matching (IM) as a pre-trained task for relative pose regression.
Specifically, we use LoFTR, an architecture that utilizes an attention-based
network pre-trained on ScanNet, to extract semi-dense feature maps, which are
then warped and fed into a pose regression network. Notably, we use a loss
function that utilizes separate terms to account for the translation direction
and scale. We believe such a separation is important because the translation
direction is determined by point correspondences, while the scale is inferred
from a prior on shape sizes. Our ablations further support this choice. We
evaluate our method on several datasets and show that it outperforms previous
end-to-end methods. The method also generalizes well to unseen datasets.
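The separated translation loss described above can be sketched as follows. This is a minimal illustration only: the abstract does not give the exact formulation, so the cosine-based direction term, the L1 scale term, and the weighting are assumptions, not the paper's actual loss.

```python
import numpy as np

def separated_translation_loss(t_pred, t_gt, w_dir=1.0, w_scale=1.0):
    """Hedged sketch of a translation loss with separate direction and
    scale terms, per the idea in the abstract. The specific terms here
    (1 - cosine similarity for direction, L1 on magnitudes for scale)
    are illustrative assumptions."""
    t_pred = np.asarray(t_pred, dtype=float)
    t_gt = np.asarray(t_gt, dtype=float)
    n_pred = np.linalg.norm(t_pred)
    n_gt = np.linalg.norm(t_gt)
    # Direction term: penalize the angle between the unit translation vectors.
    dir_loss = 1.0 - float(np.dot(t_pred / n_pred, t_gt / n_gt))
    # Scale term: penalize the difference in translation magnitudes.
    scale_loss = abs(n_pred - n_gt)
    return w_dir * dir_loss + w_scale * scale_loss
```

With this split, a prediction pointing the right way but at the wrong scale is penalized only by the scale term, which matches the intuition that direction and scale come from different cues (correspondences vs. shape-size priors).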