One-Shot Voice Conversion Using Star-Gan

2020 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP 2020)

Abstract
We address one-shot voice conversion, where the target speaker, or both the source and target speakers, are unseen in the training dataset. In our work, StarGAN is employed to perform voice conversion between speakers, and an embedding vector is used to represent speaker identity. The experiments rely on two English datasets and one Chinese dataset, involving 38 speakers in total. A user study is conducted to validate our framework in terms of reconstruction quality and conversion quality. The results show that our framework is able to perform one-shot voice conversion and also outperforms state-of-the-art methods when the test speaker is seen in the training dataset. An exploratory experiment further demonstrates that our framework can be updated with incremental training when data from new speakers becomes available.
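The abstract gives no implementation details, so the following is only a minimal PyTorch sketch of the conditioning mechanism it describes: a StarGAN-style generator driven by a speaker-embedding vector. All module names, layer sizes, and the mel-spectrogram front end are assumptions for illustration, not taken from the paper; for unseen (one-shot) speakers the embedding would presumably be produced by a reference-utterance encoder rather than the lookup table shown here.

```python
# Illustrative sketch (not the paper's implementation): a generator that
# converts source mel-spectrogram frames toward a target speaker by
# concatenating a speaker-embedding vector to every frame.
import torch
import torch.nn as nn


class SpeakerEmbedding(nn.Module):
    """Maps a speaker ID to a fixed-size embedding vector (hypothetical sizes)."""

    def __init__(self, num_speakers: int, embed_dim: int = 128):
        super().__init__()
        self.table = nn.Embedding(num_speakers, embed_dim)

    def forward(self, speaker_id: torch.Tensor) -> torch.Tensor:
        return self.table(speaker_id)  # (batch, embed_dim)


class Generator(nn.Module):
    """StarGAN-style generator: one network shared across all target speakers,
    conditioned on the target-speaker embedding."""

    def __init__(self, n_mels: int = 80, embed_dim: int = 128, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(n_mels + embed_dim, hidden, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(hidden, n_mels, kernel_size=5, padding=2),
        )

    def forward(self, mel: torch.Tensor, spk_emb: torch.Tensor) -> torch.Tensor:
        # mel: (batch, n_mels, frames); spk_emb: (batch, embed_dim)
        cond = spk_emb.unsqueeze(-1).expand(-1, -1, mel.size(-1))
        return self.net(torch.cat([mel, cond], dim=1))


# Usage: convert a 100-frame utterance toward speaker 7 (IDs are arbitrary).
if __name__ == "__main__":
    emb = SpeakerEmbedding(num_speakers=38)          # 38 speakers, as in the abstract
    gen = Generator()
    mel = torch.randn(1, 80, 100)                    # source mel-spectrogram
    tgt = emb(torch.tensor([7]))                     # target-speaker embedding
    converted = gen(mel, tgt)                        # (1, 80, 100)
    print(converted.shape)
```

Conditioning a single shared generator on a target-speaker vector, rather than training one generator per speaker pair, is what allows new speakers to be added by supplying a new embedding and fine-tuning incrementally, consistent with the incremental-training experiment mentioned above.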
Keywords
voice conversion, generative adversarial networks, StarGAN, speech, embedding, neural network