Multi-Track Timeline Control for Text-Driven 3D Human Motion Generation
CoRR (2024)
Abstract
Recent advances in generative modeling have led to promising progress on
synthesizing 3D human motion from text, with methods that can generate
character animations from short prompts and specified durations. However, using
a single text prompt as input lacks the fine-grained control needed by
animators, such as composing multiple actions and defining precise durations
for parts of the motion. To address this, we introduce the new problem of
timeline control for text-driven motion synthesis, which provides an intuitive,
yet fine-grained, input interface for users. Instead of a single prompt, users
can specify a multi-track timeline of multiple prompts organized in temporal
intervals that may overlap. This enables specifying the exact timings of each
action and composing multiple actions in sequence or at overlapping intervals.
To generate composite animations from a multi-track timeline, we propose a new
test-time denoising method. This method can be integrated with any pre-trained
motion diffusion model to synthesize realistic motions that accurately reflect
the timeline. At every step of denoising, our method processes each timeline
interval (text prompt) individually, subsequently aggregating the predictions
with consideration for the specific body parts engaged in each action.
Experimental comparisons and ablations validate that our method produces
realistic motions that respect the semantics and timing of given text prompts.
Our code and models are publicly available at https://mathis.petrovich.fr/stmc.
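The abstract describes a test-time denoising scheme: at each diffusion step, every timeline interval is denoised independently with its own text prompt, and the predictions are then aggregated per body part. The following is a minimal, hypothetical sketch of that idea, not the paper's actual implementation; `denoise_fn`, the interval format, and the joint-index aggregation are all assumptions made for illustration.

```python
import numpy as np

def denoise_step_multitrack(x_t, intervals, denoise_fn):
    """One denoising step over a multi-track timeline (illustrative sketch).

    x_t:       noisy motion, shape (T, n_joints, D)
    intervals: list of (start, end, prompt, joint_ids), possibly overlapping
    denoise_fn(crop, prompt) -> denoised crop of the same shape
               (a stand-in for a pretrained text-to-motion diffusion model)
    """
    out = np.zeros_like(x_t)
    weight = np.zeros(x_t.shape[:2] + (1,))
    for start, end, prompt, joints in intervals:
        # Denoise each timeline interval independently with its own prompt.
        pred = denoise_fn(x_t[start:end], prompt)
        # Aggregate only the body parts engaged by this action.
        out[start:end, joints] += pred[:, joints]
        weight[start:end, joints] += 1.0
    # Average where intervals/body parts overlap; keep the input elsewhere.
    covered = weight > 0
    return np.where(covered, out / np.maximum(weight, 1.0), x_t)
```

Overlapping intervals that engage the same body part are averaged here; the paper's actual aggregation rule may differ, but the crop-denoise-recombine structure matches the abstract's description.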