ELLA-V: Stable Neural Codec Language Modeling with Alignment-guided Sequence Reordering
CoRR (2024)
Abstract
The language model (LM) approach based on acoustic and linguistic prompts,
such as VALL-E, has achieved remarkable progress in the field of zero-shot
audio generation. However, existing methods still have some limitations: 1)
repetitions, transpositions, and omissions in the output synthesized speech due
to limited alignment constraints between audio and phoneme tokens; 2)
challenges of fine-grained control over the synthesized speech with
autoregressive (AR) language model; 3) infinite silence generation due to the
nature of AR-based decoding, especially under the greedy strategy. To alleviate
these issues, we propose ELLA-V, a simple but efficient LM-based zero-shot
text-to-speech (TTS) framework, which enables fine-grained control over
synthesized audio at the phoneme level. The key to ELLA-V is interleaving
sequences of acoustic and phoneme tokens, where phoneme tokens appear ahead of
the corresponding acoustic tokens. The experimental findings reveal that our
model outperforms VALL-E in terms of accuracy and delivers more stable results
using both greedy and sampling-based decoding strategies. The code of ELLA-V
will be open-sourced after cleanups. Audio samples are available at
https://ereboas.github.io/ELLAV/.
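The interleaving described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name, the list-of-lists alignment format, and the example token values are all assumptions made for clarity.

```python
def interleave(phonemes, alignment):
    """Build an interleaved token sequence in which each phoneme token
    appears ahead of its aligned acoustic tokens.

    phonemes:  list of phoneme tokens, e.g. ["HH", "AH"]
    alignment: list of lists; alignment[i] holds the acoustic tokens
               (e.g. neural codec codebook indices) aligned to phonemes[i].
    """
    sequence = []
    for ph, acoustic in zip(phonemes, alignment):
        sequence.append(ph)        # phoneme token comes first ...
        sequence.extend(acoustic)  # ... followed by its acoustic tokens
    return sequence

print(interleave(["HH", "AH"], [[101, 102], [103]]))
# → ['HH', 101, 102, 'AH', 103]
```

Because every acoustic token is locally preceded by the phoneme it realizes, an autoregressive LM trained on such sequences gets an explicit alignment signal at each step, which is what the abstract credits for reducing repetitions, transpositions, and omissions.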