SimpleSpeech: Towards Simple and Efficient Text-to-Speech with Scalar Latent Transformer Diffusion Models
arXiv (2024)
Abstract
In this study, we propose a simple and efficient Non-Autoregressive (NAR)
text-to-speech (TTS) system based on diffusion, named SimpleSpeech. Its
simplicity shows in three aspects: (1) it can be trained on a speech-only
dataset, without any alignment information; (2) it takes plain text as input
and generates speech in an NAR manner; (3) it models speech in a finite and
compact latent space, which alleviates the modeling difficulty for diffusion.
More specifically, we propose a novel speech codec model (SQ-Codec) with
scalar quantization; SQ-Codec effectively maps the complex speech signal into
a finite and compact latent space, which we name the scalar latent space.
Benefiting from SQ-Codec, we apply a novel transformer diffusion model in the
scalar latent space of SQ-Codec. We train SimpleSpeech on 4k hours of
speech-only data; it shows natural prosody and voice cloning ability. Compared
with previous large-scale TTS models, it presents significant improvements in
speech quality and generation speed. Demos are released.
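The scalar quantization idea behind SQ-Codec can be illustrated with a minimal sketch: each latent dimension is bounded and rounded independently to one of a small, fixed set of levels, giving a finite latent space. The bounding via `tanh` and the choice of `num_levels` here are assumptions for illustration, not details taken from the paper.

```python
import numpy as np

def scalar_quantize(z, num_levels=9):
    """Map each latent dimension to one of `num_levels` evenly
    spaced values in [-1, 1], yielding a finite latent space.
    (Illustrative sketch; not the paper's exact formulation.)"""
    z = np.tanh(z)                      # bound values to (-1, 1)
    scale = (num_levels - 1) / 2.0      # half the number of intervals
    return np.round(z * scale) / scale  # snap to the nearest level

latent = np.array([0.37, -1.25, 2.0])
quantized = scalar_quantize(latent)
# → array([ 0.25, -0.75,  1.  ])
```

Because every dimension takes one of only `num_levels` values, the diffusion model operates over a compact, enumerable space rather than an unbounded continuous one.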