Vision-Language Foundation Models as Effective Robot Imitators
ICLR 2024
Abstract
Recent progress in vision-language foundation models has shown their ability
to understand multimodal data and resolve complicated vision-language tasks,
including robotics manipulation. We seek a straightforward way of making use of
existing vision-language models (VLMs) with simple fine-tuning on robotics
data. To this end, we derive a simple and novel vision-language manipulation
framework, dubbed RoboFlamingo, built upon the open-source VLM OpenFlamingo.
Unlike prior works, RoboFlamingo uses the pre-trained VLM for single-step
vision-language comprehension, models sequential history information with an
explicit policy head, and is only slightly fine-tuned by imitation learning on
language-conditioned manipulation datasets. Such a decomposition provides
RoboFlamingo with the flexibility for open-loop control and deployment on
low-performance platforms. By exceeding state-of-the-art performance by a
large margin on the tested benchmark, we show that RoboFlamingo can be an
effective and competitive alternative for adapting VLMs to robot control. Our
extensive experimental results also reveal several interesting conclusions
regarding the behavior of different pre-trained VLMs on manipulation tasks. We
believe RoboFlamingo has the potential to be a cost-effective and easy-to-use
solution for robotics manipulation, empowering everyone with the ability to
fine-tune their own robotics policy.
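To make the decomposition described in the abstract concrete, below is a minimal sketch, assuming a per-step VLM encoder and a recurrent policy head; it is not the authors' implementation, and all class names, module choices, and dimensions are illustrative assumptions. The idea is that the (pre-trained) VLM processes each observation-instruction pair independently, while a lightweight head models the temporal history and predicts the robot action.

```python
# Illustrative sketch (not the RoboFlamingo code): per-step VLM comprehension
# followed by an explicit recurrent policy head over the step features.
import torch
import torch.nn as nn


class PerStepVLMEncoder(nn.Module):
    """Stand-in for a pre-trained VLM (e.g. OpenFlamingo) that maps one
    image + instruction to a single feature vector. Placeholder layers only."""

    def __init__(self, feat_dim: int = 512):
        super().__init__()
        self.visual = nn.Sequential(nn.Flatten(), nn.LazyLinear(feat_dim))
        self.text = nn.Embedding(1000, feat_dim)

    def forward(self, image: torch.Tensor, instruction_ids: torch.Tensor) -> torch.Tensor:
        vis = self.visual(image)                      # (B, feat_dim)
        txt = self.text(instruction_ids).mean(dim=1)  # (B, feat_dim)
        return vis + txt                              # fused per-step feature


class RecurrentPolicyHead(nn.Module):
    """Explicit policy head: an LSTM aggregates history and outputs a
    continuous arm action plus a gripper open/close logit per step."""

    def __init__(self, feat_dim: int = 512, action_dim: int = 6):
        super().__init__()
        self.rnn = nn.LSTM(feat_dim, feat_dim, batch_first=True)
        self.arm = nn.Linear(feat_dim, action_dim)
        self.gripper = nn.Linear(feat_dim, 1)

    def forward(self, step_feats: torch.Tensor):
        # step_feats: (B, T, feat_dim) stacked per-step VLM features
        hidden, _ = self.rnn(step_feats)
        return self.arm(hidden), self.gripper(hidden)


if __name__ == "__main__":
    B, T = 2, 8                                   # 2 trajectories, 8 steps each
    images = torch.randn(B * T, 3, 224, 224)
    tokens = torch.randint(0, 1000, (B * T, 16))  # dummy instruction token ids

    feats = PerStepVLMEncoder()(images, tokens).view(B, T, -1)
    arm_actions, gripper_logits = RecurrentPolicyHead()(feats)
    print(arm_actions.shape, gripper_logits.shape)  # (2, 8, 6) (2, 8, 1)
```

Under this split, only the small policy head needs to run at every control step once per-step features are available, which is one way to read the abstract's claim about open-loop control and deployment on low-performance platforms.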
Keywords
Large Visual Language Model, Robotics, Imitation Learning