Pheme: Efficient and Conversational Speech Generation
CoRR (2024)
Abstract
In recent years, speech generation has seen remarkable progress, now
achieving one-shot generation capability that is often virtually
indistinguishable from a real human voice. Integrating such advancements in
speech generation with large language models might revolutionize a wide range
of applications. However, certain applications, such as assistive
conversational systems, require natural and conversational speech generation
tools that also operate efficiently in real time. Current state-of-the-art
models like VALL-E and SoundStorm, powered by hierarchical neural audio codecs,
require large neural components and extensive training data to work well. In
contrast, MQTTS aims to build more compact conversational TTS models while
capitalizing on smaller-scale real-life conversational speech data. However,
its autoregressive nature yields high inference latency and thus limits its
real-time usage. In order to mitigate the current limitations of the
state-of-the-art TTS models while capitalizing on their strengths, in this work
we introduce the Pheme model series, which 1) offers compact yet high-performing
models, 2) allows for parallel generation of 3) natural conversational
speech, and 4) can be trained efficiently on smaller-scale conversational
data, cutting data demands by more than 10x while still matching the quality of
autoregressive TTS models. We also show that, through simple teacher-student
distillation, we can achieve significant improvements in voice quality for
single-speaker setups on top of pretrained Pheme checkpoints, relying solely on
synthetic speech generated by much larger teacher models. Audio samples and
pretrained models are available online.
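
To make the distillation recipe concrete, below is a minimal sketch of the data-level teacher-student setup the abstract describes: a larger teacher synthesizes speech for a target speaker, and a compact student is fine-tuned solely on those synthetic pairs. The `TinyTTS` class, the model sizes, and the training loop are hypothetical stand-ins for illustration, not the actual Pheme or teacher implementations.

```python
# Minimal sketch of data-level teacher-student distillation for TTS.
# All names and shapes here are hypothetical; the real Pheme training
# code is not reproduced in this abstract.
import torch
import torch.nn as nn

class TinyTTS(nn.Module):
    """Stand-in TTS model: maps token IDs to an audio feature sequence."""
    def __init__(self, vocab=256, dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)
        self.head = nn.Linear(dim, 80)  # e.g. 80-bin mel-spectrogram frames

    def forward(self, tokens):
        return self.head(self.embed(tokens))

teacher = TinyTTS(dim=256)  # "much larger" teacher (hypothetical scale)
student = TinyTTS(dim=64)   # compact student, e.g. a pretrained checkpoint

opt = torch.optim.Adam(student.parameters(), lr=1e-4)
texts = [torch.randint(0, 256, (32,)) for _ in range(8)]  # toy transcripts

# 1) The teacher synthesizes speech for the target speaker (no gradients).
with torch.no_grad():
    synthetic_audio = [teacher(t) for t in texts]

# 2) The student is fine-tuned purely on the synthetic (text, audio) pairs.
for tokens, target in zip(texts, synthetic_audio):
    loss = nn.functional.mse_loss(student(tokens), target)
    opt.zero_grad()
    loss.backward()
    opt.step()
```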