Federated Adversarial Domain Hallucination for Privacy-Preserving Domain Generalization

IEEE Transactions on Multimedia (2024)

Abstract
Domain generalization aims to reduce the vulnerability of deep neural networks to out-of-distribution data. With increasing data privacy concerns, federated domain generalization, where multiple domains are distributed across different local clients, has become an important research problem and brings new challenges for learning domain-invariant information from separated domains. In this paper, we address federated domain generalization from the perspective of domain hallucination. We propose a novel federated domain hallucination learning framework that exchanges nothing between clients other than model weights, based on the idea that a domain hallucination that enlarges the prediction uncertainty of the global model is more likely to transform samples into an unseen domain. Such desired domain hallucinations are achieved by generating samples that maximize the entropy of the global model while minimizing the cross-entropy of the local model, where the latter loss is introduced to preserve the sample semantics. By training the local models on the learned domain hallucinations, the final model is expected to be more robust to unseen domain shifts. We perform extensive experiments on three object classification benchmarks and one medical image segmentation benchmark. The proposed method outperforms state-of-the-art methods on all benchmarks, demonstrating its effectiveness.
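The hallucination objective described above (maximize the global model's prediction entropy, minimize the local model's cross-entropy) can be sketched as gradient-based optimization over the input. The following is a minimal illustrative sketch on toy linear softmax classifiers; the models, dimensions, and the `hallucinate` routine are stand-ins invented for illustration, not the paper's implementation, which trains deep networks inside a federated loop.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def entropy(p):
    return -np.sum(p * np.log(p + 1e-12))

def cross_entropy(p, y):
    return -np.log(p[y] + 1e-12)

# Toy linear classifiers standing in for the global and local models
# (illustrative only; the paper uses deep neural networks).
n_classes, n_feat = 4, 8
W_global = rng.normal(size=(n_classes, n_feat))
W_local = rng.normal(size=(n_classes, n_feat))

def hallucination_loss(x, y, lam=1.0):
    """Objective to minimize: -H(global prediction) + lam * CE(local prediction, y)."""
    p_g = softmax(W_global @ x)
    p_l = softmax(W_local @ x)
    return -entropy(p_g) + lam * cross_entropy(p_l, y)

def hallucinate(x, y, steps=50, lr=0.05, lam=1.0):
    """Gradient-based domain hallucination: push the input toward higher
    global-model entropy while keeping the local model's cross-entropy
    low to preserve the sample's semantics."""
    x = x.copy()
    onehot = np.eye(n_classes)[y]
    for _ in range(steps):
        # Gradient of the global model's prediction entropy w.r.t. the input:
        # dH/dz_k = -p_k (log p_k + H), then chain through the linear layer.
        p_g = softmax(W_global @ x)
        grad_H = W_global.T @ (-p_g * (np.log(p_g + 1e-12) + entropy(p_g)))
        # Gradient of the local model's cross-entropy w.r.t. the input.
        p_l = softmax(W_local @ x)
        grad_ce = W_local.T @ (p_l - onehot)
        # Descend on  -H_global + lam * CE_local.
        x -= lr * (-grad_H + lam * grad_ce)
    return x

x0 = rng.normal(size=n_feat)
y0 = 2
x_hal = hallucinate(x0, y0)
```

In the federated setting, each client would run such a step locally and share only the resulting model weights, which is what keeps the scheme privacy-preserving.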
Keywords
Domain shift, domain generalization, federated learning, privacy preserving