Domain Generalization with Global Sample Mixup.

ECCV Workshops (6)(2022)

Abstract
Deep models have demonstrated outstanding ability in various computer vision tasks but are notoriously prone to generalizing poorly when encountering unseen domains with different statistics. To alleviate this issue, this technical report presents a new domain generalization method based on training-sample mixup. The main factor enabling our superior performance is a global mixup strategy across the source domains, in which batches of samples from multiple graphics devices are mixed together to improve generalization. Since the domain gap in the NICO datasets stems mainly from intertwined background bias, the global mixup strategy reduces this gap to a great extent by producing abundant mixed backgrounds. In addition, we conducted extensive experiments on different backbones combined with various data augmentations to study the generalization performance of different model structures. Our final ensembled model achieved 74.07% accuracy on the test set, taking 3rd place by image classification accuracy (Acc.) in the NICO Common Context Generalization Challenge 2022.
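The core operation the abstract builds on is standard mixup: convex-combining pairs of training samples and their labels. The sketch below illustrates that operation on a single batch; it is an assumption-laden illustration, not the paper's implementation, and the paper's "global" variant additionally gathers batches from multiple graphics devices before mixing, which is not shown here.

```python
import numpy as np

def mixup_batch(x, y, alpha=0.2, rng=None):
    """Standard mixup: blend each sample in a batch with a randomly
    chosen partner from the same batch. `x` is a batch of inputs,
    `y` a batch of one-hot labels, `alpha` the Beta-distribution
    parameter controlling how strongly pairs are blended.
    (Hypothetical helper; the paper's global variant would first
    gather batches across devices.)"""
    rng = np.random.default_rng() if rng is None else rng
    lam = rng.beta(alpha, alpha)            # mixing coefficient in [0, 1]
    perm = rng.permutation(len(x))          # random partner for each sample
    x_mix = lam * x + (1 - lam) * x[perm]   # mixed inputs (backgrounds blend)
    y_mix = lam * y + (1 - lam) * y[perm]   # correspondingly mixed labels
    return x_mix, y_mix

# Tiny usage example: 4 samples, 2 features, 3 classes (one-hot labels).
x = np.arange(8, dtype=np.float64).reshape(4, 2)
y = np.eye(3)[[0, 1, 2, 0]]
x_mix, y_mix = mixup_batch(x, y, alpha=0.2)
```

Because each mixed label is a convex combination of two one-hot vectors, every row of `y_mix` still sums to one, so it remains a valid soft-label target for cross-entropy training.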
Key words
generalization, domain, global, sample