Metric Localization using Google Street View

2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)

Abstract
Accurate metric localization is one of the central challenges in mobile robotics. Many existing methods aim at localizing after building a map with the robot. In this paper, we present a novel approach that instead uses geotagged panoramas from Google Street View as a source of global positioning. We model the problem of localization as a non-linear least squares estimation in two phases. The first estimates the 3D positions of tracked feature points from short monocular camera sequences. The second computes the rigid body transformation between the Street View panoramas and the estimated points. The only inputs of this approach are a stream of monocular camera images and odometry estimates. We quantified the accuracy of the method by running the approach on a robotic platform in a parking lot, using visual fiducials as ground truth. Additionally, we applied the approach in the context of personal localization in a real urban scenario using data from a Google Tango tablet.
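The second phase described above, aligning Street View panorama positions with the estimated 3D feature points, amounts to estimating a rigid body transformation between two point sets. As a minimal illustration (not the authors' actual optimization, which is a joint non-linear least squares formulation), the closed-form Kabsch/SVD solution to this least-squares alignment problem can be sketched as follows; the function name and point-set shapes are assumptions for the example:

```python
import numpy as np

def rigid_transform(src, dst):
    """Estimate rotation R and translation t minimizing ||R @ src_i + t - dst_i||^2
    over corresponding 3D points, via the Kabsch/SVD closed-form solution.
    src, dst: (N, 3) arrays of corresponding points."""
    src_centroid = src.mean(axis=0)
    dst_centroid = dst.mean(axis=0)
    # Cross-covariance of the centered point sets
    H = (src - src_centroid).T @ (dst - dst_centroid)
    U, _, Vt = np.linalg.svd(H)
    # Reflection correction keeps R a proper rotation (det(R) = +1)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_centroid - R @ src_centroid
    return R, t
```

In the paper's setting, the correspondences would come from matching tracked image features against the geotagged panoramas, and the estimation is embedded in a non-linear least squares problem rather than solved in closed form; the sketch only shows the underlying rigid-alignment step.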
Keywords
Google Street View,metrical localization,Google Tango tablet,personal localization,visual fiducials,robotic platform,odometry estimates,monocular camera images,Street View panoramas,rigid body transformation,short monocular camera sequences,tracked feature points,3D position estimation,nonlinear least squares estimation,global positioning,geo-tagged panoramas,mobile robotics