Where are we in the search for an Artificial Visual Cortex for Embodied Intelligence?
NeurIPS 2023
Abstract
We present the largest and most comprehensive empirical study of pre-trained
visual representations (PVRs) or visual 'foundation models' for Embodied AI.
First, we curate CortexBench, consisting of 17 different tasks spanning
locomotion, navigation, dexterous manipulation, and mobile manipulation. Next, we
systematically evaluate existing PVRs and find that none are universally
dominant. To study the effect of pre-training data size and diversity, we
combine over 4,000 hours of egocentric videos from 7 different sources (over
4.3M images) and ImageNet to train different-sized vision transformers using
Masked Auto-Encoding (MAE) on slices of this data. Contrary to inferences from
prior work, we find that scaling dataset size and diversity does not improve
performance universally (but does so on average). Our largest model, named
VC-1, outperforms all prior PVRs on average but does not universally dominate
either. Next, we show that task- or domain-specific adaptation of VC-1 leads to
substantial gains, with VC-1 (adapted) achieving performance competitive
with or superior to the best-known results on all of the benchmarks in
CortexBench. Finally, we present real-world hardware experiments, in which VC-1
and VC-1 (adapted) outperform the strongest pre-existing PVR. Overall, this
paper presents no new techniques, but rather a rigorous systematic evaluation, a
broad set of findings about PVRs (some of which refute findings made in narrow
domains in prior work), and open-sourced code and models (which required over
10,000 GPU-hours to train) for the benefit of the research community.
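For readers unfamiliar with the recipe named above, the sketch below illustrates MAE pre-training in PyTorch: patchified images are randomly masked, a ViT-style encoder sees only the visible patches, and a lightweight decoder reconstructs the masked pixels. This is a minimal illustrative sketch, not the authors' released training code; the TinyMAE module, its layer sizes, and the helper names are assumptions for this example (the 75% mask ratio follows the original MAE paper), and the actual VC-1 models are full-scale vision transformers trained on far more data.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyMAE(nn.Module):
    """Toy masked auto-encoder (illustrative; not the released VC-1 code):
    encode only the visible patches, then reconstruct the masked ones."""
    def __init__(self, num_patches=196, patch_dim=768, dim=256, depth=4, heads=8):
        super().__init__()
        self.embed = nn.Linear(patch_dim, dim)
        self.pos = nn.Parameter(torch.zeros(1, num_patches, dim))
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(dim, heads, batch_first=True), depth)
        self.mask_token = nn.Parameter(torch.zeros(1, 1, dim))
        self.decoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(dim, heads, batch_first=True), 1)
        self.head = nn.Linear(dim, patch_dim)

    def forward(self, patches, mask_ratio=0.75):
        B, N, D = patches.shape
        n_keep = int(N * (1 - mask_ratio))
        # Per-image random permutation of patch indices; keep the first n_keep.
        ids = torch.rand(B, N, device=patches.device).argsort(dim=1)
        ids_keep, ids_mask = ids[:, :n_keep], ids[:, n_keep:]

        def take(t, idx):  # gather rows of t along the patch dimension
            return torch.gather(t, 1, idx.unsqueeze(-1).expand(-1, -1, t.shape[-1]))

        x = self.embed(patches) + self.pos
        latent = self.encoder(take(x, ids_keep))           # encode visible patches only
        # The decoder sees encoded visible tokens plus positioned mask tokens.
        masked = self.mask_token + take(self.pos.expand(B, -1, -1), ids_mask)
        decoded = self.decoder(torch.cat([latent, masked], dim=1))
        pred = self.head(decoded[:, n_keep:])              # predictions for masked slots
        return F.mse_loss(pred, take(patches, ids_mask))   # loss only on masked patches
```

A quick smoke test under the same assumptions: `loss = TinyMAE()(torch.randn(8, 196, 768))` treats each image as a 14x14 grid of flattened 16x16 RGB patches (16 * 16 * 3 = 768) and returns a scalar reconstruction loss suitable for `loss.backward()`.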
Keywords
artificial visual cortex, embodied AI