Vision and Inertial Sensor Fusion for Terrain Relative Navigation

Andrew Verras, Roshan Thomas Eapen, Andrew B. Simon, Manoranjan Majji, Ramchander Rao Bhaskara, Carolina I. Restrepo, Ronney Lovelace

AIAA Scitech 2021 Forum (2021)

Abstract
Mathematics and methods for integrating camera measurements with inertial sensors for terrain relative navigation of a space vehicle are discussed. A pinhole camera model of the vision sensors, in conjunction with measurement models of typical inertial sensors, is used to derive a position and attitude fix for the navigation state of the space vehicle. An ancillary frame initialization process is derived that exploits the three-dimensional translational motion geometry of the space vehicle to obtain uncertain estimates of the feature locations. Linear covariance analysis is carried out to derive the conditional state uncertainties of the feature locations, which are utilized by the filter in a second pass. Approaches for state estimation are tested using data obtained from a high-fidelity rendering engine developed by the team. Experimental data obtained from a medium-fidelity terrain relative navigation emulation test-bed called the Navigation, Estimation, and Sensing Testbed (NEST) are utilized to demonstrate the utility of the filter formulations developed herein.
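As a rough illustration of the pinhole camera measurement model referenced in the abstract, the sketch below projects a known feature location into normalized image coordinates given the vehicle's position and attitude. This is a minimal, generic sketch: the function name, frame conventions, and unit focal length are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def pinhole_project(p_w, R_wc, r_c, f=1.0):
    """Project a 3-D feature at world position p_w into normalized image
    coordinates, given the world-to-camera rotation R_wc, the camera
    position r_c in the world frame, and focal length f (assumed 1.0)."""
    p_c = R_wc @ (p_w - r_c)        # feature expressed in the camera frame
    if p_c[2] <= 0.0:
        raise ValueError("feature is behind the camera")
    return f * p_c[:2] / p_c[2]     # perspective division onto image plane

# Example: camera at the origin with identity attitude observing a
# feature 10 units ahead on the boresight and one slightly off-axis.
u0 = pinhole_project(np.array([0.0, 0.0, 10.0]), np.eye(3), np.zeros(3))
u1 = pinhole_project(np.array([2.0, 1.0, 10.0]), np.eye(3), np.zeros(3))
```

In a navigation filter, measurements of this form for several features are combined with the inertial sensor models to resolve the vehicle's position and attitude.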
Keywords
inertial sensor fusion,vision