UniHDA: A Unified and Versatile Framework for Multi-Modal Hybrid Domain Adaptation
arXiv (2024)
Abstract
Recently, generative domain adaptation has achieved remarkable progress,
enabling us to adapt a pre-trained generator to a new target domain. However,
existing methods simply adapt the generator to a single target domain and are
limited to a single modality, either text-driven or image-driven. Moreover,
they cannot maintain consistency with the source domain well, which impedes the
inheritance of its diversity. In this paper, we propose UniHDA, a
unified and versatile framework for generative hybrid domain
adaptation with multi-modal references from multiple domains. We use a CLIP
encoder to project the multi-modal references into a unified embedding space and
then linearly interpolate the direction vectors from multiple target domains to
achieve hybrid domain adaptation. To ensure consistency with the
source domain, we propose a novel cross-domain spatial structure (CSS) loss
that maintains detailed spatial structure information between the source and
target generators. Experiments show that the adapted generator can synthesise realistic
images with various attribute compositions. Additionally, our framework is
generator-agnostic and readily extends to multiple generators, e.g., StyleGAN, EG3D,
and Diffusion Models.
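
As a rough illustration of the direction-vector interpolation the abstract describes, the sketch below (Python, using OpenAI's open-source `clip` package) projects text and image references into the shared CLIP embedding space and linearly blends the resulting source-to-target directions. The prompts, file name, and mixing weights `alphas` are illustrative assumptions, not the authors' code.

```python
import torch
import clip  # OpenAI CLIP: pip install git+https://github.com/openai/CLIP.git
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

def embed_text(prompt: str) -> torch.Tensor:
    """Project a text reference into the CLIP embedding space (L2-normalised)."""
    tokens = clip.tokenize([prompt]).to(device)
    with torch.no_grad():
        feat = model.encode_text(tokens)
    return feat / feat.norm(dim=-1, keepdim=True)

def embed_image(path: str) -> torch.Tensor:
    """Project an image reference into the same CLIP embedding space."""
    image = preprocess(Image.open(path)).unsqueeze(0).to(device)
    with torch.no_grad():
        feat = model.encode_image(image)
    return feat / feat.norm(dim=-1, keepdim=True)

# Direction vectors from the source domain to each target domain.
source = embed_text("photo of a face")             # hypothetical source description
targets = [
    embed_text("photo of a zombie face"),          # text-driven target
    embed_image("sketch_reference.png"),           # image-driven target (hypothetical file)
]
directions = [t - source for t in targets]

# Linearly interpolate per-domain directions into one hybrid-domain direction.
alphas = [0.5, 0.5]                                # assumed mixing weights
hybrid_direction = sum(a * d for a, d in zip(alphas, directions))
```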
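The cross-domain spatial structure (CSS) loss is only named in the abstract; one plausible reading, sketched below under our own assumptions, compares self-similarity maps of intermediate feature activations from the frozen source generator and the adapting target generator for the same latent code. The cosine self-similarity and MSE penalty are illustrative choices, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def self_similarity(feat: torch.Tensor) -> torch.Tensor:
    """Pairwise cosine similarity between all spatial positions of a feature map.

    feat: (B, C, H, W) intermediate generator activations.
    Returns: (B, H*W, H*W) spatial self-similarity matrix.
    """
    b, c, h, w = feat.shape
    flat = feat.reshape(b, c, h * w).permute(0, 2, 1)  # (B, HW, C)
    flat = F.normalize(flat, dim=-1)
    return flat @ flat.transpose(1, 2)                 # (B, HW, HW)

def css_loss(src_feat: torch.Tensor, tgt_feat: torch.Tensor) -> torch.Tensor:
    """Penalise divergence between source and target spatial structure."""
    return F.mse_loss(self_similarity(tgt_feat), self_similarity(src_feat))
```

In training, `src_feat` and `tgt_feat` would come from the same layer of the frozen source generator and the adapting generator, fed an identical latent code.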