Meta-Sim: Learning to Generate Synthetic Datasets

2019 IEEE/CVF International Conference on Computer Vision (ICCV 2019)

Cited by 246 | Views 5
Abstract
Training models to high performance requires large labeled datasets, which are expensive to obtain. The goal of our work is to automatically synthesize labeled datasets that are relevant for a downstream task. We propose Meta-Sim, which learns a generative model of synthetic scenes and obtains images along with their corresponding ground truth via a graphics engine. We parametrize our dataset generator with a neural network, which learns to modify attributes of scene graphs obtained from probabilistic scene grammars so as to minimize the distribution gap between its rendered outputs and target data. If the real dataset comes with a small labeled validation set, we additionally aim to optimize a meta-objective, i.e., downstream task performance. Experiments show that the proposed method can greatly improve content generation quality over a human-engineered probabilistic scene grammar, both qualitatively and quantitatively as measured by performance on a downstream task.
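The pipeline described in the abstract can be sketched in miniature: sample a scene graph from a hand-written probabilistic grammar, pass its node attributes through a learned transformation (the dataset generator), and score the result with a distribution-gap loss against target data. The sketch below is an illustrative toy, not the authors' implementation: the grammar, the identity placeholder for the network, and the RBF-kernel MMD loss are all assumptions chosen to keep the example self-contained in NumPy.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_scene_graph():
    """Toy probabilistic scene grammar: a scene containing a random
    number of 'car' nodes, each with two continuous attributes
    (e.g., position and scale), sampled from a prior."""
    n_cars = int(rng.integers(1, 4))
    return [{"type": "car", "attrs": rng.uniform(0.0, 1.0, size=2)}
            for _ in range(n_cars)]

def transform_attrs(graph, W, b):
    """Stand-in for the learned dataset generator: modify each node's
    continuous attributes. Meta-Sim trains this mapping; here W and b
    are hypothetical placeholder parameters."""
    return [{**node, "attrs": W @ node["attrs"] + b} for node in graph]

def mmd2(x, y, gamma=1.0):
    """Biased estimate of squared MMD with an RBF kernel -- an example
    distribution-gap loss between generated and target attribute sets."""
    def k(a, b):
        d = a[:, None, :] - b[None, :, :]
        return np.exp(-gamma * (d ** 2).sum(-1))
    return k(x, x).mean() + k(y, y).mean() - 2.0 * k(x, y).mean()

# Identity transform: an "untrained" generator that leaves the grammar's
# samples unchanged. Training would adjust W, b to shrink the gap.
W, b = np.eye(2), np.zeros(2)
graphs = [transform_attrs(sample_scene_graph(), W, b) for _ in range(50)]
gen = np.array([n["attrs"] for g in graphs for n in g])
target = rng.uniform(0.3, 0.7, size=(100, 2))  # pretend "real" statistics
gap = float(mmd2(gen, target))
print(gap)
```

In the actual method, the rendered images (not raw attributes) feed the distribution-matching objective, and a separate meta-objective uses validation-set task performance; the sketch only illustrates the attribute-modification and gap-minimization structure.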
Keywords
meta-objective, downstream task performance, content generation quality, human-engineered probabilistic scene grammar, Meta-Sim, generate synthetic datasets, training models, high-end performance, labeled datasets, generative model, synthetic scenes, corresponding ground-truth, graphics engine, dataset generator, scene graphs, probabilistic scene grammars, labeled validation set