Deep Radio-Visual Localization

2018 IEEE Winter Conference on Applications of Computer Vision (WACV)

Abstract
For many automated navigation applications, the underlying localization algorithm must continuously produce accurate and stable results by drawing on a spectrum of redundant sensing technologies. To this end, various sensors have been used for localization, such as Wi-Fi, Bluetooth, GPS, LiDAR, and cameras. In particular, a class of vision-based localization techniques using Structure from Motion (SfM) has been shown to produce very accurate position estimates in real-world settings under moderate assumptions about the motion of the camera and the amount of visual texture in the environment. However, when these assumptions are violated, SfM techniques can fail catastrophically (i.e., they cannot generate any estimate). Recently, deep convolutional neural networks (CNNs) have been applied to images to robustly regress 6-DOF camera poses, at the cost of lower accuracy than SfM. In this work, we propose improving the image-based localization accuracy of a deep CNN by incorporating Bluetooth radio-wave signal readings. In our experiments, we show that our proposed dual-stream CNN can regress 6-DOF poses from images and radio-wave signals more robustly than either sensing modality alone. More importantly, we show that when both modes are used, the localization accuracy of the proposed deep CNN is comparable to that of SfM and significantly more robust than SfM.
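The dual-stream idea described above can be sketched as a late-fusion regressor: each modality is embedded separately, the embeddings are concatenated, and a final layer regresses a 6-DOF pose (translation plus unit quaternion). The sketch below is a minimal, dependency-free illustration using NumPy, with plain linear maps standing in for the paper's CNN streams; all layer sizes and names are hypothetical, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the two learned feature extractors.
# In the paper these are CNN streams over images and Bluetooth
# readings; here untrained linear maps keep the sketch minimal.
W_img = rng.standard_normal((128, 2048)) * 0.01   # image features -> 128-d embedding
W_rf = rng.standard_normal((128, 32)) * 0.01      # Bluetooth RSSI vector -> 128-d embedding
W_pose = rng.standard_normal((7, 256)) * 0.01     # fused embedding -> 7-d pose output

def dual_stream_pose(img_feat, rssi):
    """Late fusion: embed each modality, concatenate, regress 6-DOF pose."""
    z_img = np.maximum(W_img @ img_feat, 0.0)     # ReLU on image stream
    z_rf = np.maximum(W_rf @ rssi, 0.0)           # ReLU on radio stream
    fused = np.concatenate([z_img, z_rf])         # 256-d joint representation
    out = W_pose @ fused                          # 3 translation + 4 quaternion values
    t, q = out[:3], out[3:]
    q = q / (np.linalg.norm(q) + 1e-8)            # normalize to a unit quaternion
    return t, q

# Dummy inputs in place of real image features and RSSI readings.
t, q = dual_stream_pose(rng.standard_normal(2048), rng.standard_normal(32))
```

If one stream's input is unavailable or degraded (e.g., a textureless scene), the other stream's embedding still contributes to the fused representation, which is the robustness argument the abstract makes for combining modalities.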
Keywords
deep radio-visual localization,automated navigation applications,redundant sensing technologies,Wi-Fi,cameras,vision-based localization techniques,visual texture,SfM techniques,deep convolutional neural network,localization accuracy,deep CNN,Bluetooth radio-wave signal readings,dual-stream CNN,sensing modality,position estimates,GPS,LiDAR,structure-from-motion,6-DOF camera poses