MusicMagus: Zero-Shot Text-to-Music Editing via Diffusion Models
CoRR (2024)
Abstract
Recent advances in text-to-music generation models have opened new avenues in
musical creativity. However, music generation usually involves iterative
refinements, and how to edit the generated music remains a significant
challenge. This paper introduces a novel approach to the editing of music
generated by such models, enabling the modification of specific attributes,
such as genre, mood and instrument, while maintaining other aspects unchanged.
Our method transforms text editing into latent-space manipulation, with an
added constraint to enforce consistency. It seamlessly integrates
with existing pretrained text-to-music diffusion models without requiring
additional training. Experimental results demonstrate superior performance over
both zero-shot and certain supervised baselines in style and timbre transfer
evaluations. Additionally, we showcase the practical applicability of our
approach in real-world music editing scenarios.
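The abstract describes the core idea as recasting a text edit (e.g. changing "piano" to "violin" in the prompt) as a move in the text-embedding latent space, with a consistency constraint that preserves unedited attributes. The sketch below illustrates that arithmetic only; it is not the authors' code, and `embed()` is a hypothetical stand-in for a pretrained text encoder, with `strength` and `consistency` as illustrative knobs:

```python
import numpy as np

def embed(prompt: str, dim: int = 8) -> np.ndarray:
    """Hypothetical stand-in for a pretrained text encoder
    (deterministic within one run for illustration only)."""
    seed = abs(hash(prompt)) % (2**32)
    return np.random.default_rng(seed).standard_normal(dim)

def edit_embedding(src_prompt: str, src_attr: str, tgt_attr: str,
                   strength: float = 1.0, consistency: float = 0.5) -> np.ndarray:
    e_src = embed(src_prompt)
    # Edit direction: from the source attribute (e.g. "piano")
    # toward the target attribute (e.g. "violin") in embedding space.
    direction = embed(tgt_attr) - embed(src_attr)
    e_edit = e_src + strength * direction
    # Consistency constraint (here a simple blend): pull the edited
    # embedding back toward the original so other attributes stay put.
    return consistency * e_src + (1.0 - consistency) * e_edit

edited = edit_embedding("a calm piano piece", "piano", "violin")
print(edited.shape)
```

In the actual system the edited embedding would condition a pretrained text-to-music diffusion model at sampling time, requiring no additional training; the blend above is only a toy proxy for the paper's consistency constraint.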