LEAP-VO: Long-term Effective Any Point Tracking for Visual Odometry
CoRR (2024)
Abstract
Visual odometry estimates the motion of a moving camera based on visual
input. Existing methods, mostly focusing on two-view point tracking, often
ignore the rich temporal context in the image sequence, thereby overlooking the
global motion patterns and providing no assessment of the full trajectory
reliability. These shortcomings hinder performance in scenarios with occlusion,
dynamic objects, and low-texture areas. To address these challenges, we present
the Long-term Effective Any Point Tracking (LEAP) module. LEAP innovatively
combines visual, inter-track, and temporal cues with mindfully selected anchors
for dynamic track estimation. Moreover, LEAP's temporal probabilistic
formulation integrates distribution updates into a learnable iterative
refinement module to reason about point-wise uncertainty. Based on these
traits, we develop LEAP-VO, a robust visual odometry system adept at handling
occlusions and dynamic scenes. Our mindful integration showcases a novel
practice by employing long-term point tracking as the front-end. Extensive
experiments demonstrate that the proposed pipeline significantly outperforms
existing baselines across various visual odometry benchmarks.
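The abstract describes a temporal probabilistic formulation in which distribution updates are folded into a learnable iterative refinement loop to yield point-wise uncertainty. As an illustration only (the paper's actual module, network, and update rule are not specified here), the sketch below assumes tracks are refined over several iterations while a per-point variance estimate is maintained; the fixed damping step `0.5` stands in for a learned update, and `target` is a placeholder for visual evidence.

```python
import numpy as np

def iterative_track_refinement(init_tracks, num_iters=4, seed=0):
    """Illustrative sketch, NOT the paper's actual LEAP module:
    iteratively refine 2D point tracks (T frames x N points) while
    maintaining a per-point, per-frame variance as an uncertainty proxy."""
    rng = np.random.default_rng(seed)
    tracks = init_tracks.astype(float).copy()        # shape (T, N, 2)
    var = np.ones(tracks.shape[:2])                  # shape (T, N)
    # Stand-in for visual evidence a learned network would consume.
    target = tracks + rng.normal(0.0, 0.5, tracks.shape)
    for _ in range(num_iters):
        residual = target - tracks
        tracks = tracks + 0.5 * residual             # placeholder for learned update
        # Blend old variance with current squared residual magnitude.
        var = 0.5 * var + 0.5 * (residual ** 2).mean(axis=-1)
    return tracks, var
```

In such a scheme the residual shrinks each iteration, so the variance estimate contracts for points that converge and stays large for points that do not; a downstream odometry back-end could then down-weight unreliable tracks.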