StableIdentity: Inserting Anybody into Anywhere at First Sight
CoRR (2024)
Abstract
Recent advances in large pretrained text-to-image models have shown
unprecedented capabilities for high-quality human-centric generation; however,
customizing face identity remains an intractable problem. Existing methods
cannot ensure stable identity preservation and flexible editability, even with
several images per subject during training. In this work, we propose
StableIdentity, which allows identity-consistent recontextualization with just
one face image. More specifically, we employ a face encoder with an identity
prior to encode the input face, and then land the face representation into a
space with an editable prior, which is constructed from celeb names. By
incorporating both the identity prior and the editability prior, the learned
identity can be injected into diverse contexts. In addition, we design a masked
two-phase diffusion loss to boost the pixel-level perception of the input face
and maintain the diversity of generation. Extensive experiments demonstrate our
method outperforms previous customization methods. Moreover, the learned
identity can be flexibly combined with off-the-shelf modules such as
ControlNet. Notably, to the best of our knowledge, we are the first to directly inject
the identity learned from a single image into video/3D generation without
finetuning. We believe the proposed StableIdentity is an important step toward
unifying image, video, and 3D customized generation.
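The abstract names a masked two-phase diffusion loss but gives no formula. Purely as an illustration, and not the authors' actual loss, one way such a loss could work is a timestep-dependent masked MSE on the noise prediction: plain denoising loss at early (noisy) timesteps to preserve generation diversity, and a face-mask-restricted loss at late timesteps to sharpen pixel-level perception of the face. All names, shapes, and the `t_threshold` split below are hypothetical:

```python
import numpy as np

def masked_two_phase_loss(noise_pred, noise_true, face_mask, t, t_threshold=500):
    """Illustrative sketch only (not the paper's loss).

    noise_pred, noise_true: (B, C, H, W) arrays; face_mask: (B, 1, H, W) in {0, 1};
    t: (B,) integer diffusion timesteps.
    """
    err = (noise_pred - noise_true) ** 2
    # Late (small-t) steps attend only to the face region; early steps to the whole image.
    late = (t < t_threshold).astype(float).reshape(-1, 1, 1, 1)
    w = np.broadcast_to(late * face_mask + (1.0 - late), err.shape)
    return float((w * err).sum() / max(w.sum(), 1.0))
```

With all timesteps above the threshold this reduces to the ordinary unmasked MSE, so the masking only tightens supervision near the end of the denoising trajectory.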