Unshuffling Data for Improved Generalization

arXiv (2020)

Cited by 26
Abstract
The inability to generalize to test data outside the distribution of a training set is at the core of the practical limits of machine learning. We show that mixing and shuffling training examples when training deep neural networks is not an optimal practice. On the contrary, partitioning the training data into non-i.i.d. subsets can guide the model to use reliable statistical patterns while ignoring spurious correlations in the training data. We demonstrate multiple use cases in which these subsets are built using unsupervised clustering, prior knowledge, or other metadata available in existing datasets. The approach is supported by recent results on a causal view of generalization, is simple to apply, and demonstrably improves generalization. Applied to the task of visual question answering, we obtain state-of-the-art performance on VQA-CP. We also show an improvement over data augmentation using equivalent questions on GQA. Finally, we show a small improvement when training a model simultaneously on VQA v2 and Visual Genome, treating them as two distinct environments rather than one aggregated training set.
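As a rough illustration of the partitioning idea described above, the sketch below clusters example features into non-i.i.d. subsets ("environments") and draws each mini-batch from a single subset instead of from the shuffled whole. This is a minimal sketch under assumptions, not the paper's released implementation: the helper names (make_environments, sample_batch) and the use of k-means on precomputed features are illustrative choices standing in for the unsupervised-clustering variant the abstract mentions.

# Minimal sketch: partition a training set into non-i.i.d. "environments"
# via unsupervised clustering, then draw each mini-batch from a single
# environment rather than shuffling the whole dataset.
# All names here are hypothetical, chosen for this illustration only.

import numpy as np
from sklearn.cluster import KMeans

def make_environments(features, n_envs, seed=0):
    # Cluster example features into n_envs groups; each cluster becomes
    # one training environment. Returns a list of index arrays.
    labels = KMeans(n_clusters=n_envs, random_state=seed).fit_predict(features)
    return [np.flatnonzero(labels == k) for k in range(n_envs)]

def sample_batch(env_indices, batch_size, rng):
    # Pick one environment at random, then sample the batch entirely
    # from it, so every batch is non-i.i.d. relative to the full set.
    env = env_indices[rng.integers(len(env_indices))]
    return rng.choice(env, size=min(batch_size, len(env)), replace=False)

# Usage: 1000 examples with 32-d features, split into 4 environments.
rng = np.random.default_rng(0)
feats = rng.normal(size=(1000, 32))
envs = make_environments(feats, n_envs=4)
batch = sample_batch(envs, batch_size=64, rng=rng)
print(batch.shape)  # (64,) -- all indices drawn from one cluster

The same batching scheme applies when the environments come from prior knowledge or dataset metadata instead of clustering; only make_environments would change.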