DiaLoc: An Iterative Approach to Embodied Dialog Localization
CVPR 2024
Abstract
Multimodal learning has advanced the performance for many vision-language
tasks. However, most existing works in embodied dialog research focus on
navigation and leave the localization task understudied. The few existing
dialog-based localization approaches assume the availability of the entire
dialog prior to localization, which is impractical for deployed dialog-based
localization. In this paper, we propose DiaLoc, a new dialog-based localization
framework that aligns with real human operator behavior. Specifically, we
produce an iterative refinement of location predictions, which can visualize
the current pose beliefs after each dialog turn. DiaLoc effectively utilizes the
multimodal data for multi-shot localization, where a fusion encoder fuses
vision and dialog information iteratively. We achieve state-of-the-art results
on the embodied dialog-based localization task, in both single-shot (+7.08%
Acc5@valUnseen) and multi-shot settings (+10.85% Acc5@valUnseen). DiaLoc
narrows the gap between simulation and real-world applications, opening doors
for future research on collaborative localization and navigation.