Disentangling Correlated Speaker and Noise for Speech Synthesis via Data Augmentation and Adversarial Factorization

2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)

Abstract
To leverage crowd-sourced data to train multi-speaker text-to-speech (TTS) models that can synthesize clean speech for all speakers, it is essential to learn disentangled representations that can independently control the speaker identity and background noise in generated signals. However, learning such representations is challenging, due to the lack of labels describing the recording conditions of each training example, and the fact that speakers and recording conditions are often correlated, e.g., because users often make many recordings with the same equipment. This paper proposes to address this problem with three components: (1) formulating a conditional generative model with factorized latent variables, (2) using data augmentation to add noise that is not correlated with speaker identity and whose label is known during training, and (3) using adversarial factorization to improve disentanglement. Experimental results demonstrate that the proposed method can disentangle speaker and noise attributes even when they are correlated in the training data, and can be used to consistently synthesize clean speech for all speakers. Ablation studies verify the importance of each proposed component.
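To make components (2) and (3) concrete, below is a minimal PyTorch sketch of how noise augmentation with known labels can be combined with adversarial factorization of two latent variables. It uses the gradient-reversal trick, a common way to implement adversarial factorization, but the module names, dimensions, additive-noise scheme, and latent structure are all illustrative assumptions, not the paper's exact architecture.

```python
# Sketch: adversarial factorization of speaker and noise latents, assuming a
# simplified setup. All sizes and names below are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; negates gradients in the backward pass."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

def grad_reverse(x, lam=1.0):
    return GradReverse.apply(x, lam)

class FactorizedEncoder(nn.Module):
    """Maps a mel-spectrogram to separate speaker and noise latents."""
    def __init__(self, feat_dim=80, z_dim=16, n_speakers=100, n_noise_labels=2):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(feat_dim, 256), nn.ReLU())
        self.z_speaker = nn.Linear(256, z_dim)  # speaker latent
        self.z_noise = nn.Linear(256, z_dim)    # residual/noise latent
        # Adversarial heads: each tries to predict the *wrong* attribute from
        # a latent; gradient reversal pushes the encoder to discard it there.
        self.speaker_from_noise = nn.Linear(z_dim, n_speakers)
        self.noise_from_speaker = nn.Linear(z_dim, n_noise_labels)

    def forward(self, feats):
        h = self.trunk(feats.mean(dim=1))  # average over time frames
        zs, zn = self.z_speaker(h), self.z_noise(h)
        adv_spk_logits = self.speaker_from_noise(grad_reverse(zn))
        adv_noise_logits = self.noise_from_speaker(grad_reverse(zs))
        return zs, zn, adv_spk_logits, adv_noise_logits

# Toy training step. Noise labels come "for free" from augmentation: clean
# utterances get label 0, copies with synthetic background noise get label 1,
# so the added noise is not correlated with any speaker.
model = FactorizedEncoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

clean = torch.randn(8, 120, 80)                # [batch, frames, mel bins]
noisy = clean + 0.3 * torch.randn_like(clean)  # hypothetical augmentation
feats = torch.cat([clean, noisy])
noise_labels = torch.cat([torch.zeros(8), torch.ones(8)]).long()
speaker_ids = torch.randint(0, 100, (8,)).repeat(2)  # same speakers twice

zs, zn, adv_spk, adv_noise = model(feats)
# The heads minimize these losses, but gradient reversal makes the encoder
# maximize them, stripping speaker info from zn and noise info from zs.
loss = (F.cross_entropy(adv_spk, speaker_ids)
        + F.cross_entropy(adv_noise, noise_labels))
opt.zero_grad()
loss.backward()
opt.step()
```

In a full system these adversarial losses would be added to the generative model's reconstruction (and, for a variational autoencoder, KL) objectives, so that the disentangled latents condition the synthesizer.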
Keywords
text-to-speech synthesis, variational autoencoder, adversarial training, data augmentation