M3IL: Multi-Modal Meta-Imitation Learning

Transactions of the Japanese Society for Artificial Intelligence (2023)

Abstract
Imitation Learning (IL) is a promising route to intelligent robots, since it allows users to easily teach robots a variety of tasks. In particular, Few-Shot Imitation Learning (FSIL) aims to infer and quickly adapt to unseen tasks from a small amount of data. Although FSIL requires only a few shots of data, the high cost of demonstrations remains a critical problem in IL: whenever we want to teach the robot a new task, we must execute the task for every assignment. Inspired by the fact that humans specify tasks using language instructions without executing them, we propose a multi-modal FSIL setting in this work. The model leverages both image and language information in the training phase, and uses either both modalities or language alone in the testing phase. We also propose Multi-Modal Meta-Imitation Learning (M3IL), which can infer tasks from image or language information alone. M3IL outperforms the baseline in both the standard and the proposed settings. Our results show the effectiveness of M3IL and the importance of language instructions in the FSIL setting.
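The setting described above can be illustrated with a minimal sketch: two modality encoders map a visual demonstration and a language instruction into a shared task embedding, and a policy conditions its action on that embedding plus the current observation. All dimensions, the linear encoders, and the averaging fusion are hypothetical stand-ins for illustration, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions -- chosen for illustration, not from the paper.
IMG_DIM, LANG_DIM, EMB_DIM, OBS_DIM, ACT_DIM = 64, 32, 16, 8, 4

# Random linear maps standing in for learned encoder/policy networks.
W_img = rng.normal(size=(IMG_DIM, EMB_DIM))
W_lang = rng.normal(size=(LANG_DIM, EMB_DIM))
W_policy = rng.normal(size=(EMB_DIM + OBS_DIM, ACT_DIM))

def task_embedding(image=None, language=None):
    """Map whichever modalities are available into a shared task embedding.

    In the multi-modal FSIL setting, training sees both modalities, while
    testing may provide both or language alone, so either input may be None.
    """
    parts = []
    if image is not None:
        parts.append(image @ W_img)
    if language is not None:
        parts.append(language @ W_lang)
    if not parts:
        raise ValueError("at least one modality is required")
    # Simple averaging fusion (an assumption for this sketch).
    return np.mean(parts, axis=0)

def policy(observation, embedding):
    """Condition the action on the current observation and the task embedding."""
    return np.concatenate([embedding, observation]) @ W_policy

# Test-time, language-only task specification:
lang_instr = rng.normal(size=LANG_DIM)
obs = rng.normal(size=OBS_DIM)
action = policy(obs, task_embedding(language=lang_instr))
```

The key point of the setting is that `task_embedding` accepts either modality, so a new task can be specified by a language instruction without executing a demonstration.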
Keywords
learning, multi-modal, meta-imitation