CosmicMan: A Text-to-Image Foundation Model for Humans
CVPR 2024
Abstract
We present CosmicMan, a text-to-image foundation model specialized for
generating high-fidelity human images. Unlike current general-purpose
foundation models that are stuck in the dilemma of inferior quality and
text-image misalignment for humans, CosmicMan enables generating
photo-realistic human images with meticulous appearance, reasonable structure,
and precise text-image alignment with detailed dense descriptions. At the heart
of CosmicMan's success are the new reflections and perspectives on data and
models: (1) We found that data quality and a scalable data production flow are
essential for the final results from trained models. Hence, we propose a new
data production paradigm, Annotate Anyone, which serves as a perpetual data
flywheel to produce high-quality data with accurate yet cost-effective
annotations over time. Based on this, we constructed a large-scale dataset,
CosmicMan-HQ 1.0, with 6 million high-quality real-world human images at a mean
resolution of 1488×1255, paired with precise text annotations derived from
115 million attributes across diverse granularities. (2) We argue that a
text-to-image foundation model specialized for humans must be pragmatic – easy
to integrate into down-streaming tasks while effective in producing
high-quality human images. Hence, we propose to model the relationship between
dense text descriptions and image pixels in a decomposed manner, and present
the Decomposed-Attention-Refocusing (Daring) training framework. It seamlessly
decomposes the cross-attention features in existing text-to-image diffusion
models and enforces attention refocusing without adding extra modules. Through
Daring, we show that explicitly discretizing the continuous text space into
several basic groups that align with human body structure is the key to
tackling the misalignment problem with ease.
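
To make the attention-refocusing idea concrete, here is a minimal PyTorch sketch of how one might group caption tokens by body-structure category and penalize cross-attention mass falling outside each group's image region. The function name `attention_refocus_loss`, the group names, and the loss form are illustrative assumptions, not the paper's actual implementation.

```python
import torch

# A minimal sketch of attention refocusing over decomposed token groups.
# Everything here (function name, group names, loss form) is an
# illustrative assumption, not CosmicMan's actual implementation.
def attention_refocus_loss(cross_attn, token_groups, region_masks):
    """cross_attn:  (B, HW, T) cross-attention maps from a diffusion UNet.
    token_groups:  dict mapping a group name to caption token indices.
    region_masks:  dict mapping a group name to a (B, HW) binary mask of
                   the matching image region (e.g. from human parsing)."""
    loss = cross_attn.new_zeros(())
    for name, idx in token_groups.items():
        # Aggregate this group's token attention into one (B, HW) map.
        attn = cross_attn[:, :, idx].mean(dim=-1)
        attn = attn / (attn.sum(dim=-1, keepdim=True) + 1e-8)
        # Penalize attention mass that lands outside the group's region.
        inside = (attn * region_masks[name].float()).sum(dim=-1)
        loss = loss + (1.0 - inside).mean()
    return loss / max(len(token_groups), 1)

# Toy usage: batch of 2, a 16x16 latent (HW=256), a 77-token caption.
B, HW, T = 2, 256, 77
cross_attn = torch.rand(B, HW, T).softmax(dim=-1)
token_groups = {"head": [3, 4], "upper_body": [7, 8], "lower_body": [12]}
region_masks = {k: torch.rand(B, HW) > 0.5 for k in token_groups}
print(attention_refocus_loss(cross_attn, token_groups, region_masks))
```

Note that a loss of this shape adds no new modules: it reuses the cross-attention maps the diffusion model already computes, which matches the abstract's claim that Daring enforces refocusing without extra parameters.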