PLGSLAM: Progressive Neural Scene Representation with Local to Global Bundle Adjustment
CoRR (2023)
Abstract
Neural implicit scene representations have recently shown encouraging results
in dense visual SLAM. However, existing methods produce low-quality scene
reconstruction and low-accuracy localization when scaling up to large indoor
scenes and long sequences. These limitations stem mainly from their single,
global radiance field of finite capacity, which does not adapt to large
scenarios. Their end-to-end pose networks are also not robust enough as
cumulative errors grow in large scenes. To this end, we introduce PLGSLAM, a
neural visual SLAM system capable of high-fidelity surface reconstruction and
robust camera tracking in real time. To handle large-scale indoor scenes,
PLGSLAM proposes a progressive scene representation method that dynamically
allocates a new local scene representation, trained on the frames within a
local sliding window. This allows the system to scale up to larger indoor
scenes and improves robustness, even under pose drift. Within each local scene
representation, PLGSLAM combines tri-planes for local high-frequency features
with multi-layer perceptron (MLP) networks for low-frequency features,
achieving smoothness and scene completion in unobserved areas. Moreover, we
propose a local-to-global bundle adjustment method with a global keyframe
database to address the increased pose drift on long sequences. Experimental
results demonstrate that PLGSLAM achieves state-of-the-art scene reconstruction
and tracking performance across various datasets and scenarios, in both small
and large-scale indoor environments.

Minimal illustrative sketches of these three components (progressive
allocation of local representations, the tri-plane + MLP hybrid, and
local-to-global bundle adjustment) follow below.
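A minimal sketch of how progressive allocation of local scene representations could work. The abstract only states that new local representations are allocated dynamically and trained on frames in a local sliding window; the spherical coverage region, the 4 m threshold, and the `LocalField` placeholder below are illustrative assumptions, not the paper's design.

```python
# Sketch of progressive local-representation allocation.
# Assumptions (not from the paper): each local representation covers a
# sphere of radius `bound` around the pose that spawned it, and a new one
# is allocated once the camera leaves that region. `LocalField` is a
# hypothetical stand-in for a tri-plane + MLP scene representation.
from dataclasses import dataclass, field

import numpy as np


@dataclass
class LocalField:
    """Placeholder for one local scene representation."""
    center: np.ndarray                             # world-space anchor of this field
    keyframes: list = field(default_factory=list)  # frames used to train it


class ProgressiveSceneRepresentation:
    def __init__(self, bound: float = 4.0):
        self.bound = bound                  # hypothetical coverage radius (meters)
        self.fields: list[LocalField] = []

    def active(self) -> LocalField:
        return self.fields[-1]

    def add_frame(self, cam_position: np.ndarray, frame_id: int) -> LocalField:
        # Allocate a new local field when none exists yet or the camera has
        # moved outside the active field's coverage region.
        if (not self.fields
                or np.linalg.norm(cam_position - self.active().center) > self.bound):
            self.fields.append(LocalField(center=cam_position.copy()))
        self.active().keyframes.append(frame_id)
        return self.active()


# Usage: walk a camera along x; a second field is allocated past 4 m.
rep = ProgressiveSceneRepresentation(bound=4.0)
for i, x in enumerate(np.linspace(0.0, 10.0, 6)):
    rep.add_frame(np.array([x, 0.0, 0.0]), frame_id=i)
print(len(rep.fields))  # -> 2 with this trajectory
```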
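A sketch of the tri-plane + MLP idea: three learnable 2D feature planes supply high-frequency detail, while a small coordinate MLP supplies a smooth low-frequency component. The plane resolution, feature sizes, fusion by summation, and the SDF-style scalar output are illustrative choices, not the paper's exact architecture.

```python
# Sketch of a tri-plane + MLP hybrid scene representation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TriPlaneMLP(nn.Module):
    def __init__(self, feat_dim: int = 32, res: int = 128, hidden: int = 64):
        super().__init__()
        # Three learnable 2D feature planes: XY, XZ, YZ (high-frequency branch).
        self.planes = nn.ParameterList(
            [nn.Parameter(0.01 * torch.randn(1, feat_dim, res, res)) for _ in range(3)]
        )
        # Low-frequency branch: a small MLP on the raw 3D coordinate.
        self.low_freq = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(), nn.Linear(hidden, feat_dim)
        )
        # Decoder mapping fused features to one scalar (e.g. an SDF value).
        self.decoder = nn.Sequential(
            nn.Linear(feat_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1)
        )

    def sample_planes(self, pts: torch.Tensor) -> torch.Tensor:
        # pts: (N, 3), normalized to [-1, 1]. Project onto the three planes
        # and bilinearly interpolate each feature map at the 2D projections.
        coords = [pts[:, [0, 1]], pts[:, [0, 2]], pts[:, [1, 2]]]
        feats = 0.0
        for plane, uv in zip(self.planes, coords):
            grid = uv.view(1, -1, 1, 2)                         # (1, N, 1, 2)
            f = F.grid_sample(plane, grid, align_corners=True)  # (1, C, N, 1)
            feats = feats + f.squeeze(0).squeeze(-1).t()        # (N, C)
        return feats

    def forward(self, pts: torch.Tensor) -> torch.Tensor:
        # Fuse high-frequency plane features with the low-frequency MLP output.
        fused = self.sample_planes(pts) + self.low_freq(pts)
        return self.decoder(fused)


# Usage: query 1024 random points in the normalized cube.
model = TriPlaneMLP()
out = model(torch.rand(1024, 3) * 2 - 1)
print(out.shape)  # torch.Size([1024, 1])
```

Summing the two branches lets the MLP extrapolate smooth geometry into unobserved areas while the planes refine observed regions, which is one plausible reading of the smoothness-plus-detail claim in the abstract.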
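A structural sketch of local-to-global bundle adjustment over a global keyframe database: local BA refines the poses in a recent sliding window, while global BA samples keyframes from the whole history to counter accumulated drift. The translation-only pose parameterization and the toy residual (pulling each pose toward a stored target instead of rendering through the scene representation) are stand-ins for brevity, not the paper's formulation.

```python
# Sketch of local-to-global bundle adjustment with a keyframe database.
import random

import torch


class KeyframeDatabase:
    """Global store of keyframes; local BA uses a sliding window over it,
    global BA samples from the whole history to fight accumulated drift."""

    def __init__(self):
        self.poses: list[torch.Tensor] = []    # per-keyframe pose parameters
        self.targets: list[torch.Tensor] = []  # per-keyframe observations

    def add(self, pose: torch.Tensor, target: torch.Tensor):
        self.poses.append(pose.clone().requires_grad_(True))
        self.targets.append(target)

    def local_window(self, size: int):
        return list(range(max(0, len(self.poses) - size), len(self.poses)))

    def global_sample(self, k: int):
        return random.sample(range(len(self.poses)), min(k, len(self.poses)))


def bundle_adjust(db: KeyframeDatabase, ids, iters: int = 50, lr: float = 1e-2):
    # Jointly refine the selected poses by gradient descent on a toy
    # residual; a real system would render through the scene representation
    # and compare against the keyframe's image and depth.
    opt = torch.optim.Adam([db.poses[i] for i in ids], lr=lr)
    for _ in range(iters):
        opt.zero_grad()
        loss = sum(((db.poses[i] - db.targets[i]) ** 2).sum() for i in ids)
        loss.backward()
        opt.step()


# Usage: noisy keyframe poses along a straight trajectory.
db = KeyframeDatabase()
for t in range(20):
    noisy = torch.tensor([float(t), 0.0, 0.0]) + 0.1 * torch.randn(3)
    db.add(noisy, target=torch.tensor([float(t), 0.0, 0.0]))

bundle_adjust(db, db.local_window(size=5))  # local BA: recent sliding window
bundle_adjust(db, db.global_sample(k=10))   # global BA: whole keyframe history
```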