MobileSpeech: A Fast and High-Fidelity Framework for Mobile Zero-Shot Text-to-Speech
CoRR (2024)
Abstract
Zero-shot text-to-speech (TTS) has gained significant attention due to its
powerful voice cloning capabilities, requiring only a few seconds of unseen
speaker voice prompts. However, all previous work has been developed for
cloud-based systems. Taking autoregressive models as an example, although these
approaches achieve high-fidelity voice cloning, they fall short in terms of
inference speed, model size, and robustness. Therefore, we propose
MobileSpeech, the first fast, lightweight, and robust zero-shot text-to-speech
system designed for mobile devices. Specifically: 1) leveraging a discrete
speech codec, we design a parallel speech mask decoder module, SMD, which
incorporates hierarchical information from the codec and weighting mechanisms
across its layers during generation.
Moreover, to bridge the gap between text and speech, we introduce a high-level
probabilistic mask that simulates the progression of information flow from less
to more during speech generation. 2) For speaker prompts, we extract
fine-grained prompt durations from the prompt speech and fuse text and prompt
speech via cross-attention in SMD. We demonstrate the effectiveness of
MobileSpeech on multilingual datasets of different scales, achieving
state-of-the-art results in generation speed and speech quality.
MobileSpeech achieves a real-time factor (RTF) of 0.09 on a single A100 GPU,
and we have successfully deployed MobileSpeech on mobile devices. Audio samples
are available at .
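The parallel mask decoder described above follows the general recipe of iterative mask-predict generation over codec tokens: start fully masked, and at each step commit only the most confident predictions while re-masking the rest, so information flows from less to more. The sketch below is a minimal, hypothetical illustration of that generic decoding loop (the schedule, the `MASK` sentinel, and `predict_fn` are illustrative assumptions, not the paper's actual SMD implementation):

```python
import numpy as np

MASK = -1  # illustrative sentinel id for masked codec positions


def cosine_schedule(step, total_steps):
    """Fraction of positions left masked after this step (shrinks to 0)."""
    return float(np.cos(np.pi / 2 * (step + 1) / total_steps))


def iterative_decode(seq_len, predict_fn, total_steps=8):
    """Generic mask-predict loop: start fully masked, then repeatedly
    fix the most confident predictions and re-predict the rest."""
    tokens = np.full(seq_len, MASK, dtype=np.int64)
    for step in range(total_steps):
        logits = predict_fn(tokens)                 # (seq_len, vocab)
        probs = np.exp(logits - logits.max(-1, keepdims=True))
        probs /= probs.sum(-1, keepdims=True)
        pred = probs.argmax(-1)                     # best token per position
        conf = probs.max(-1)                        # its confidence
        conf[tokens != MASK] = np.inf               # never re-rank fixed tokens
        n_keep_masked = int(cosine_schedule(step, total_steps) * seq_len)
        if n_keep_masked == 0:
            # final step: commit every remaining masked position
            tokens = np.where(tokens == MASK, pred, tokens)
            break
        # keep the n_keep_masked least confident positions masked,
        # commit predictions everywhere else
        order = np.argsort(conf)
        fix = np.ones(seq_len, dtype=bool)
        fix[order[:n_keep_masked]] = False
        tokens = np.where(fix & (tokens == MASK), pred, tokens)
    return tokens
```

In this style of decoder, the number of parallel refinement steps is a constant independent of sequence length, which is what makes the approach attractive for low-RTF, on-device inference compared with token-by-token autoregressive decoding.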