MoMA: Multimodal LLM Adapter for Fast Personalized Image Generation
arXiv (2024)
Abstract
In this paper, we present MoMA: an open-vocabulary, training-free
personalized image model that boasts flexible zero-shot capabilities. As
foundational text-to-image models rapidly evolve, the demand for robust
image-to-image translation grows. Addressing this need, MoMA specializes in
subject-driven personalized image generation. Utilizing an open-source,
Multimodal Large Language Model (MLLM), we train MoMA to serve a dual role as
both a feature extractor and a generator. This approach effectively synergizes
reference image and text prompt information to produce valuable image features,
facilitating an image diffusion model. To better leverage the generated
features, we further introduce a novel self-attention shortcut method that
efficiently transfers image features to an image diffusion model, improving the
resemblance of the target object in generated images. Remarkably, as a
tuning-free plug-and-play module, our model requires only a single reference
image and outperforms existing methods in generating images with high detail
fidelity, enhanced identity-preservation and prompt faithfulness. Our work is
open-source, thereby providing universal access to these advancements.
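To make the described self-attention shortcut concrete, below is a minimal PyTorch-style sketch of attention-level image-feature injection: reference-image features are appended to the keys and values of a self-attention block so that the latent image tokens can attend to the subject. This is an illustrative assumption about how such a shortcut could be wired, not the authors' exact implementation; the class name `SelfAttentionShortcut` and arguments like `subject_feats` and `scale` are hypothetical.

```python
# Illustrative sketch only; the actual MoMA shortcut may differ.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SelfAttentionShortcut(nn.Module):
    """Self-attention block that optionally appends reference-image features
    to its keys/values so generated tokens can attend to the subject."""

    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.num_heads = num_heads
        self.to_q = nn.Linear(dim, dim, bias=False)
        self.to_k = nn.Linear(dim, dim, bias=False)
        self.to_v = nn.Linear(dim, dim, bias=False)
        self.to_out = nn.Linear(dim, dim)

    def forward(self, hidden_states, subject_feats=None, scale: float = 1.0):
        # hidden_states: (batch, tokens, dim)      latent image tokens
        # subject_feats: (batch, ref_tokens, dim)  reference-subject features (hypothetical input)
        q = self.to_q(hidden_states)
        if subject_feats is not None:
            # Shortcut: extend the attention context with the reference features.
            context = torch.cat([hidden_states, scale * subject_feats], dim=1)
        else:
            context = hidden_states
        k = self.to_k(context)
        v = self.to_v(context)

        b, n, d = q.shape
        h = self.num_heads
        q, k, v = (t.reshape(b, -1, h, d // h).transpose(1, 2) for t in (q, k, v))
        out = F.scaled_dot_product_attention(q, k, v)
        out = out.transpose(1, 2).reshape(b, n, d)
        return self.to_out(out)
```

In this sketch, setting `scale` to 0 (or passing no `subject_feats`) recovers plain self-attention, which is one way a plug-and-play module can leave the base diffusion model untouched when no reference image is provided.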