VADRA: Visual Adversarial Domain Randomization and Augmentation.

arXiv: Computer Vision and Pattern Recognition (2018)

Cited by 23 | Views: 6
Abstract
We address the problem of learning effectively from synthetic, domain-randomized data. While previous works have showcased domain randomization as an effective learning approach, it fails to challenge the learner and wastes valuable compute on generating easy examples. This can be attributed to uniform randomization over the rendering parameter distribution. In this work, we first provide a theoretical perspective on the characteristics of domain randomization and analyze its limitations. To address these limitations, we propose a novel algorithm that closes the loop between the synthetic generative model and the learner in an adversarial fashion. Our framework extends naturally to the scenario where unlabelled target data is available, thus incorporating domain adaptation. We evaluate our method on diverse vision tasks using state-of-the-art simulators for public datasets including CLEVR, Syn2Real, and VIRAT, where we demonstrate that a learner trained using adversarial data generation outperforms one trained with a random data generation strategy.
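The abstract's core contrast is between sampling rendering parameters uniformly and sampling them adversarially against the current learner. The paper's actual algorithm is not given here; the following is a minimal illustrative sketch of that contrast, in which adversarial sampling simply keeps the candidate scene parameters on which a (toy) learner loss is highest. All names (`uniform_sample`, `adversarial_sample`, the parameter ranges, and the toy loss) are assumptions for illustration, not the authors' implementation.

```python
import random

def uniform_sample(param_ranges):
    """Plain domain randomization: draw each rendering parameter
    uniformly and independently from its range."""
    return {name: random.uniform(lo, hi) for name, (lo, hi) in param_ranges.items()}

def adversarial_sample(param_ranges, learner_loss, n_candidates=32):
    """Toy adversarial randomization: draw several uniform candidates
    and keep the one the current learner finds hardest (highest loss)."""
    candidates = [uniform_sample(param_ranges) for _ in range(n_candidates)]
    return max(candidates, key=learner_loss)

if __name__ == "__main__":
    random.seed(0)
    # Hypothetical rendering parameters for a simulated scene.
    ranges = {"light_intensity": (0.1, 1.0), "camera_azimuth": (0.0, 360.0)}
    # Toy stand-in for the learner's loss: assume the model currently
    # struggles with dimly lit renders.
    loss = lambda p: 1.0 - p["light_intensity"]
    hard = adversarial_sample(ranges, loss)
    print("hard example:", hard, "loss:", loss(hard))
```

In the paper's actual framework the generator is updated in a closed loop with the learner rather than by rejection sampling as above, but the sketch captures why adversarial generation avoids wasting compute on easy examples: every emitted scene is, by construction, hard for the current model.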
Keywords
Test data generation,Domain (software engineering),Set (abstract data type),Object detection,Contextual image classification,Machine learning,Computer science,Space (commercial competition),Uniform distribution (continuous),Parameter space,Artificial intelligence