A Visual-Inertial Navigation Coupled Localization Method Based on Adaptive Point-Line Feature Extraction

IEEE Sensors Journal (2023)

Abstract
Autonomous localization and mapping in complex environments are prerequisites for the autonomous intelligence of unmanned aerial vehicles (UAVs). Current strategies for boosting localization precision integrate various information sources, including inertial data, point features, and line features. While line features offer richer structural insight, their extraction and computation are extremely time-consuming, challenging the system's real-time responsiveness. To address this, we propose a visual-inertial navigation coupled localization method based on adaptive point-line feature extraction (VIN-APL). The method features an adaptive extraction mechanism that dynamically adjusts the line feature extraction threshold in response to environmental texture variations, better balancing localization accuracy against computation cost. Experimental results show that VIN-APL achieves better localization accuracy than VINS-Mono on most of the tested sequences, especially the more difficult ones, with an average localization error reduction of 13.58%. Compared with PL-VINS, VIN-APL not only achieves superior accuracy across most sequences but also reduces the average computation time of the line feature extraction node by 45.72%.
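The abstract does not state the adaptation rule itself, but the idea of a line-extraction threshold that responds to scene texture can be illustrated with a simple feedback scheme. The sketch below is a hypothetical illustration under assumed names and parameters (adapt_line_threshold, the target line count, the step size), not the VIN-APL algorithm.

```python
# Minimal sketch (assumption, not the authors' implementation): relax the
# line-detector threshold in low-texture frames and tighten it in rich ones,
# keeping the number of extracted line features near a target budget.

def adapt_line_threshold(threshold: float,
                         num_lines: int,
                         target_lines: int = 60,   # assumed per-frame budget
                         tolerance: int = 10,
                         step: float = 0.05,
                         min_thr: float = 0.1,
                         max_thr: float = 0.9) -> float:
    """Return an updated detection threshold for the next frame.

    threshold -- current detector threshold (e.g., minimum normalized segment length)
    num_lines -- line segments extracted in the current frame
    """
    if num_lines < target_lines - tolerance:
        # Too few lines (weak texture): lower the threshold to accept more.
        threshold -= step
    elif num_lines > target_lines + tolerance:
        # Too many lines (rich texture): raise the threshold to save time.
        threshold += step
    return max(min_thr, min(max_thr, threshold))


if __name__ == "__main__":
    thr = 0.5
    for detected in (15, 25, 70, 120, 55):  # simulated per-frame line counts
        thr = adapt_line_threshold(thr, detected)
        print(f"detected={detected:3d}  next threshold={thr:.2f}")
```

With such a rule, frames with sparse texture progressively admit shorter or weaker segments, while cluttered frames filter aggressively, which is the accuracy/computation trade-off the abstract describes.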
Keywords
localization, visual-inertial, point-line