ONE-SHOT VOICE CONVERSION BASED ON SPEAKER AWARE MODULE

2021 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP 2021)

Abstract
Voice conversion (VC) is the task of converting the speaker identity of an utterance while preserving its linguistic content. Although several methods have been proposed to enable VC with non-parallel data, it remains difficult to model a voice without a large amount of data or an adaptation process. In this paper, we propose a speaker-aware voice conversion (SAVC) system that realizes one-shot voice conversion without an adaptation stage. SAVC utilizes a speaker aware module (SAM) to disentangle speaker embeddings. The SAM comprises a dynamic reference encoder, a static speaker knowledge block (SKB), and a multi-head attention layer: the reference encoder compresses a variable-length utterance into a fixed-length vector, the SKB is made up of pre-extracted x-vectors, and the multi-head attention layer generates a weighted combination of speaker embeddings. Phonetic posteriorgrams (PPGs), serving as the content encoding, are then concatenated with the speaker embedding and sent to the decoder module to generate acoustic features. Experimental results on the Aishell-1 corpus show that the proposed method improves speaker similarity and the speech quality of converted utterances.
Keywords
speaker aware voice conversion, one-shot, phonetic posteriorgrams, x-vector
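The architecture described in the abstract can be traced with a minimal sketch (not the authors' code): a reference encoder compresses a variable-length mel sequence into a fixed-length query, a frozen bank of pre-extracted x-vectors plays the role of the SKB, and multi-head attention produces a weighted speaker embedding that is concatenated with PPGs before the decoder. The layer types, dimensions, and module names below are illustrative assumptions, not values taken from the paper.

```python
import torch
import torch.nn as nn


class SpeakerAwareModule(nn.Module):
    """Sketch of the speaker aware module (SAM): dynamic reference encoder +
    static speaker knowledge block (SKB) + multi-head attention.
    The GRU encoder, embedding size 256, and 4 heads are assumptions."""

    def __init__(self, n_mels=80, embed_dim=256, num_heads=4, num_skb_speakers=100):
        super().__init__()
        # Dynamic reference encoder: compress a variable-length mel sequence
        # into a fixed-length query vector.
        self.reference_encoder = nn.GRU(n_mels, embed_dim, batch_first=True)
        # Static SKB: pre-extracted x-vectors (random placeholders here),
        # registered as a buffer so they stay fixed during training.
        self.register_buffer("skb", torch.randn(num_skb_speakers, embed_dim))
        # Multi-head attention combines SKB entries into a weighted speaker embedding.
        self.attention = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)

    def forward(self, ref_mels):
        # ref_mels: (batch, frames, n_mels) mel features of the reference utterance
        _, h = self.reference_encoder(ref_mels)        # h: (1, batch, embed_dim)
        query = h.transpose(0, 1)                      # (batch, 1, embed_dim)
        keys = self.skb.unsqueeze(0).expand(ref_mels.size(0), -1, -1)
        spk_emb, attn_weights = self.attention(query, keys, keys)
        return spk_emb.squeeze(1), attn_weights        # spk_emb: (batch, embed_dim)


if __name__ == "__main__":
    sam = SpeakerAwareModule()
    ref_mels = torch.randn(2, 120, 80)                 # two reference utterances
    spk_emb, _ = sam(ref_mels)
    # PPGs (the dimension 218 is a placeholder) are concatenated with the
    # speaker embedding, broadcast over time, before entering the decoder.
    ppgs = torch.randn(2, 200, 218)
    decoder_input = torch.cat(
        [ppgs, spk_emb.unsqueeze(1).expand(-1, ppgs.size(1), -1)], dim=-1)
    print(spk_emb.shape, decoder_input.shape)          # (2, 256), (2, 200, 474)
```

The sketch only traces tensor shapes; in the full system the reference encoder, attention layer, and decoder would presumably be trained jointly while the x-vectors in the SKB remain pre-extracted and fixed, which is what makes one-shot conversion possible without an adaptation stage.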