Cross-Lingual Vision-Language Navigation

An Yan, Xin Wang, Jiangtao Feng, Lei Li, William Yang Wang

arXiv (2019)

Abstract
Vision-Language Navigation (VLN) is the task in which an agent navigates photo-realistic environments by following natural language instructions. Previous research on VLN has been conducted primarily on the Room-to-Room (R2R) dataset, which contains only English instructions. The ultimate goal of VLN, however, is to serve people speaking arbitrary languages. To this end, we collect a cross-lingual R2R dataset, extending the original benchmark with corresponding Chinese instructions. Since it is impractical to collect human-annotated instructions for every language, we propose, based on the newly introduced dataset, a general cross-lingual VLN framework that enables instruction-following navigation across languages. We first explore the possibility of building a cross-lingual agent when no training data in the target language is available. The cross-lingual agent is equipped with a meta-learner to aggregate cross-lingual representations and with a visually grounded cross-lingual alignment module to align textual representations across languages. Under this zero-shot learning scenario, our model shows competitive results even compared to a model trained with all target-language instructions. In addition, we introduce an adversarial domain adaptation loss to improve the transferability of our model when a certain amount of target-language data is available. Our dataset and methods demonstrate the potential of building scalable cross-lingual agents that serve speakers of different languages.
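The abstract names three mechanisms: a meta-learner that aggregates cross-lingual representations, a visually grounded cross-lingual alignment module, and an adversarial domain adaptation loss. Below is a minimal PyTorch sketch of how such components are commonly realized, not the paper's actual architecture: the gate network, the hidden size of 512, the text-to-text contrastive loss, and the gradient-reversal discriminator are all illustrative assumptions. In particular, the paper's alignment module is visually grounded, whereas this sketch aligns text encodings only, and the abstract does not specify how its adversarial loss is implemented; the gradient reversal layer shown here (Ganin & Lempitsky, 2015) is one standard choice.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class CrossLingualAggregator(nn.Module):
    """Hypothetical meta-learner: mixes encodings of the same
    instruction in two languages via a learned scalar gate."""

    def __init__(self, hidden_dim: int = 512):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Linear(2 * hidden_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 1),
        )

    def forward(self, h_src: torch.Tensor, h_tgt: torch.Tensor) -> torch.Tensor:
        # h_src, h_tgt: (batch, hidden_dim) sentence-level encodings
        # of the same instruction in the source and target language.
        alpha = torch.sigmoid(self.gate(torch.cat([h_src, h_tgt], dim=-1)))
        return alpha * h_src + (1.0 - alpha) * h_tgt


def alignment_loss(h_src: torch.Tensor, h_tgt: torch.Tensor,
                   temperature: float = 0.1) -> torch.Tensor:
    """In-batch contrastive alignment: paired translations should map
    to nearby points; other instructions in the batch act as negatives.
    (A stand-in for the paper's visually grounded alignment module.)"""
    z_src = F.normalize(h_src, dim=-1)
    z_tgt = F.normalize(h_tgt, dim=-1)
    logits = z_src @ z_tgt.t() / temperature  # (batch, batch) similarities
    targets = torch.arange(logits.size(0), device=logits.device)
    return F.cross_entropy(logits, targets)


class GradReverse(torch.autograd.Function):
    """Gradient reversal: identity on the forward pass, gradient
    negated and scaled by lambd on the backward pass."""

    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None


class LanguageDiscriminator(nn.Module):
    """Classifies which language an encoding came from; the reversed
    gradient trains the encoder to fool it, pushing both languages
    toward a shared, language-invariant representation."""

    def __init__(self, hidden_dim: int = 512, lambd: float = 1.0):
        super().__init__()
        self.lambd = lambd
        self.classifier = nn.Sequential(
            nn.Linear(hidden_dim, hidden_dim // 2),
            nn.ReLU(),
            nn.Linear(hidden_dim // 2, 2),  # two language domains, e.g. EN vs. ZH
        )

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        return self.classifier(GradReverse.apply(h, self.lambd))
```

In training, the discriminator's cross-entropy loss would simply be added to the navigation objective; the reversal layer makes minimizing it adversarial for the instruction encoder, which is the usual way such a domain adaptation loss is wired in.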