Fast Uncertainty Quantification for Deep Object Pose Estimation

2021 IEEE International Conference on Robotics and Automation (ICRA 2021)

Cited 20 | Viewed 86
Abstract
Deep learning-based object pose estimators are often unreliable and overconfident especially when the input image is outside the training domain, for instance, with sim2real transfer. Efficient and robust uncertainty quantification (UQ) in pose estimators is critically needed in many robotic tasks. In this work, we propose a simple, efficient, and plug-and-play UQ method for 6-DoF object pose estimation. We ensemble 2–3 pre-trained models with different neural network architectures and/or training data sources, and compute their average pair-wise disagreement against one another to obtain the uncertainty quantification. We propose four disagreement metrics, including a learned metric, and show that the average distance (ADD) is the best learning-free metric and it is only slightly worse than the learned metric, which requires labeled target data. Our method has several advantages compared to the prior art: 1) our method does not require any modification of the training process or the model inputs; and 2) it needs only one forward pass for each model. We evaluate the proposed UQ method on three tasks where our uncertainty quantification yields much stronger correlations with pose estimation errors than the baselines. Moreover, in a real robot grasping task, our method increases the grasping success rate from 35% to 90%. Video and code are available at https://sites.google.com/view/fastuq.
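The core idea in the abstract — ensembling a few pose estimators and scoring uncertainty as their average pairwise disagreement under the ADD metric — can be sketched as follows. This is a minimal illustration, not the authors' implementation; the function names and the choice of 4x4 homogeneous matrices for poses are assumptions.

```python
import numpy as np

def add_metric(points, pose_a, pose_b):
    """Average distance (ADD) between the object's 3D model points
    transformed by two candidate 6-DoF poses (4x4 homogeneous matrices)."""
    pts_h = np.hstack([points, np.ones((points.shape[0], 1))])  # (N, 4)
    pa = pts_h @ pose_a.T  # model points under pose A
    pb = pts_h @ pose_b.T  # model points under pose B
    return np.linalg.norm(pa[:, :3] - pb[:, :3], axis=1).mean()

def ensemble_uncertainty(points, poses):
    """Uncertainty = mean pairwise ADD disagreement across the
    ensemble's pose predictions (2-3 models in the paper)."""
    n = len(poses)
    dists = [add_metric(points, poses[i], poses[j])
             for i in range(n) for j in range(i + 1, n)]
    return float(np.mean(dists))
```

A large ensemble disagreement flags an unreliable estimate (e.g., an out-of-domain input), which a downstream task such as grasping can use to reject the prediction.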
Keywords
robot grasping task,deep object pose estimation,robotic tasks,6-DoF object pose estimation,neural network architectures,training data sources,average pair-wise disagreement,average distance,labeled target data,UQ method,uncertainty quantification yields,learning-free metric,deep learning-based object pose estimators,uncertainty quantification