Unified-IO 2: Scaling Autoregressive Multimodal Models with Vision, Language, Audio, and Action
CoRR (2023)
Abstract
We present Unified-IO 2, the first autoregressive multimodal model capable of understanding and generating images, text, audio, and actions. To unify these different modalities, we tokenize inputs and outputs (images, text, audio, actions, bounding boxes, etc.) into a shared semantic space and then process them with a single encoder-decoder transformer model. Since training with such diverse modalities is challenging, we propose various architectural improvements to stabilize model training. We train our model from scratch on a large multimodal pre-training corpus drawn from diverse sources, using a multimodal mixture-of-denoisers objective. To learn an expansive set of skills, such as following multimodal instructions, we construct an ensemble of 120 datasets with prompts and augmentations and finetune on it. With a single unified model, Unified-IO 2 achieves state-of-the-art performance on the GRIT benchmark and strong results on more than 35 benchmarks, including image generation and understanding, natural language understanding, video and audio understanding, and robotic manipulation. We release all our models to the research community.
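The abstract describes one encoder-decoder transformer operating over a shared token space for all modalities. The sketch below is a minimal, hypothetical illustration of that idea in PyTorch, not the paper's actual implementation: per-modality tokenizer stubs map inputs into disjoint ID ranges of a single shared vocabulary, and one encoder-decoder predicts output tokens autoregressively. All names here (UnifiedEncoderDecoder, tokenize_text, tokenize_image_codes) and the vocabulary sizes are assumptions for illustration.

```python
# Minimal sketch (assumed, not the paper's code): a single encoder-decoder
# transformer over a shared multimodal token vocabulary.
import torch
import torch.nn as nn

# Hypothetical shared vocabulary: text IDs occupy [0, 30000),
# image VQ codes are offset into [30000, 38192).
TEXT_VOCAB, IMAGE_CODES = 30_000, 8_192
VOCAB_SIZE = TEXT_VOCAB + IMAGE_CODES

def tokenize_text(ids):
    # Stub: assume ids already come from a text tokenizer, in [0, TEXT_VOCAB).
    return torch.tensor(ids)

def tokenize_image_codes(codes):
    # Stub: offset VQ codes so they occupy their own slice of the vocabulary.
    return torch.tensor(codes) + TEXT_VOCAB

class UnifiedEncoderDecoder(nn.Module):
    """One transformer for every modality, per the abstract's description."""
    def __init__(self, d_model=512, nhead=8, layers=4):
        super().__init__()
        self.embed = nn.Embedding(VOCAB_SIZE, d_model)
        self.transformer = nn.Transformer(
            d_model=d_model, nhead=nhead,
            num_encoder_layers=layers, num_decoder_layers=layers,
            batch_first=True,
        )
        self.lm_head = nn.Linear(d_model, VOCAB_SIZE)

    def forward(self, src_ids, tgt_ids):
        # Causal mask so the decoder predicts target tokens autoregressively.
        mask = self.transformer.generate_square_subsequent_mask(tgt_ids.size(1))
        hidden = self.transformer(
            self.embed(src_ids), self.embed(tgt_ids), tgt_mask=mask
        )
        return self.lm_head(hidden)  # logits over the shared vocabulary

# Usage: tokens from different modalities concatenate into one sequence.
model = UnifiedEncoderDecoder()
src = torch.cat(
    [tokenize_text([11, 42, 7]), tokenize_image_codes([5, 99])]
).unsqueeze(0)
tgt = tokenize_text([2, 11, 42]).unsqueeze(0)
logits = model(src, tgt)  # shape: (1, tgt_len, VOCAB_SIZE)
print(logits.shape)
```

Mapping every modality into one vocabulary is what lets a single set of transformer weights handle inputs and outputs as interchangeable token sequences; the real model adds the architectural stabilizations and the mixture-of-denoisers training objective mentioned above.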