Explore until Confident: Efficient Exploration for Embodied Question Answering
arXiv (2024)
Abstract
We consider the problem of Embodied Question Answering (EQA), which refers to
settings where an embodied agent such as a robot needs to actively explore an
environment to gather information until it is confident about the answer to a
question. In this work, we leverage the strong semantic reasoning capabilities
of large vision-language models (VLMs) to efficiently explore and answer such
questions. However, there are two main challenges when using VLMs in EQA: they
lack an internal memory for mapping the scene, which is needed to plan
exploration over time, and their confidence can be miscalibrated, causing the
robot to stop exploring prematurely or to over-explore. We propose a method that
first builds a semantic map of the scene from depth information and visual
prompting of a VLM, leveraging its vast knowledge of relevant regions
of the scene for exploration. Next, we use conformal prediction to calibrate
the VLM's question answering confidence, allowing the robot to know when to
stop exploration, leading to a more calibrated and efficient exploration
strategy. To test our framework in simulation, we also contribute a new EQA
dataset with diverse, realistic human-robot scenarios and scenes built upon the
Habitat-Matterport 3D Research Dataset (HM3D). Both simulated and real robot
experiments show our proposed approach improves the performance and efficiency
over baselines that do not leverage a VLM for exploration or do not calibrate its
confidence. Webpage with experiment videos and code:
https://explore-eqa.github.io/
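To make the visual-prompting step concrete: the idea is to overlay numbered markers on candidate exploration frontiers in the robot's camera view, then ask the VLM which numbered region looks most promising for answering the question. Below is a minimal sketch of such an overlay, assuming a PIL image and pixel-space frontier coordinates; the function name annotate_frontiers and the marker styling are illustrative, not the paper's implementation.

```python
from PIL import Image, ImageDraw

def annotate_frontiers(rgb, frontier_pixels):
    """Overlay numbered markers on candidate frontier locations so a VLM can
    be asked, via visual prompting, which numbered region to explore next.
    (Illustrative sketch; not the authors' actual pipeline.)"""
    img = rgb.copy()
    draw = ImageDraw.Draw(img)
    for i, (x, y) in enumerate(frontier_pixels):
        r = 12  # marker radius in pixels
        draw.ellipse([x - r, y - r, x + r, y + r], outline="red", width=3)
        draw.text((x + r + 2, y - 6), str(i), fill="red")
    return img

# Hypothetical usage with a placeholder frame and two frontier candidates.
frame = Image.new("RGB", (640, 480), "gray")
prompt_img = annotate_frontiers(frame, [(160, 240), (480, 200)])
prompt_img.save("visual_prompt.png")
```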
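The stopping rule rests on conformal prediction: calibrate a nonconformity threshold on held-out questions, then keep exploring until the set of answers that clears the threshold shrinks to a single candidate. The sketch below shows a generic single-step split-conformal version of this idea, not the paper's exact procedure; conformal_threshold, the toy calibration probabilities, and the candidate answer distribution are all assumptions for illustration.

```python
import numpy as np

def conformal_threshold(true_answer_probs, alpha=0.1):
    """Split conformal calibration. true_answer_probs: the VLM's softmax
    probability of the correct answer on each held-out calibration question.
    Returns a threshold q_hat such that the set of answers with nonconformity
    score <= q_hat covers the true answer with probability >= 1 - alpha."""
    scores = 1.0 - np.asarray(true_answer_probs, dtype=float)
    n = len(scores)
    # Finite-sample corrected quantile level, clipped into [0, 1].
    q_level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    return np.quantile(scores, q_level, method="higher")

def prediction_set(answer_probs, q_hat):
    """All candidate answers whose nonconformity score clears the threshold."""
    return [a for a, p in answer_probs.items() if 1.0 - p <= q_hat]

# Hypothetical usage: keep exploring until the calibrated set is a singleton.
q_hat = conformal_threshold([0.92, 0.75, 0.88, 0.61, 0.95, 0.83], alpha=0.2)
vlm_probs = {"A": 0.81, "B": 0.14, "C": 0.03, "D": 0.02}  # current VLM output
done = len(prediction_set(vlm_probs, q_hat)) == 1
print(q_hat, prediction_set(vlm_probs, q_hat), done)
```

Stopping on a singleton prediction set, rather than on the raw confidence of the top answer, is what gives the stopping decision a coverage guarantee at the chosen level 1 - alpha.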