Optimizing Bilingual Neural Transducer with Synthetic Code-switching Text Generation

Thien Nguyen, Nathalie Tran, Liuhui Deng, Thiago Fraga da Silva, Matthew Radzihovsky, Roger Hsiao, Henry Mason, Stefan Braun, Erik McDermott, Dogan Can, Pawel Swietojanski, Lyan Verwimp, Sibel Oyman, Tresi Arvizo, Honza Silovsky, Arnab Ghoshal, Mathieu Martel, Bharat Ram Ambati, Mohamed Ali

arXiv (2022)

Abstract
Code-switching describes the practice of using more than one language in the same sentence. In this study, we investigate how to optimize a neural-transducer-based bilingual automatic speech recognition (ASR) model for code-switching speech. Focusing on the scenario where the ASR model is trained without supervised code-switching data, we found that semi-supervised training and synthetic code-switched data can improve the bilingual ASR system on code-switching speech. We analyze how each of the neural transducer's encoders contributes towards code-switching performance by measuring encoder-specific recall values, and evaluate our English/Mandarin system on the ASCEND data set. Our final system achieves 25% mixed error rate (MER) on the ASCEND English/Mandarin code-switching test set -- reducing the MER by 2.1% absolute compared to the previous literature -- while maintaining good accuracy on the monolingual test sets.
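The abstract reports results in mixed error rate (MER), the standard metric for Mandarin/English code-switching ASR: English is scored at the word level and Mandarin at the character level, with all tokens weighted equally in the edit-distance computation. The sketch below is a minimal, hedged illustration of how such a metric is typically computed; it is not the authors' scoring script, and the tokenization regex and helper names are assumptions for this example.

```python
# Illustrative MER computation (not from the paper): Mandarin characters and
# English words are treated as equal-weight tokens before edit-distance scoring.
import re

def tokenize_mixed(text: str) -> list[str]:
    """Split text into individual Mandarin characters and English words."""
    # CJK Unified Ideographs block covers common Mandarin characters.
    pattern = re.compile(r"[\u4e00-\u9fff]|[A-Za-z']+")
    return pattern.findall(text)

def edit_distance(ref: list[str], hyp: list[str]) -> int:
    """Levenshtein distance over token sequences (single-row DP)."""
    dp = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        prev, dp[0] = dp[0], i
        for j, h in enumerate(hyp, 1):
            cur = min(dp[j] + 1,          # deletion
                      dp[j - 1] + 1,      # insertion
                      prev + (r != h))    # substitution (0 cost if equal)
            prev, dp[j] = dp[j], cur
    return dp[-1]

def mixed_error_rate(refs: list[str], hyps: list[str]) -> float:
    """MER = total edit distance / total reference tokens over a test set."""
    errors = tokens = 0
    for ref, hyp in zip(refs, hyps):
        r, h = tokenize_mixed(ref), tokenize_mixed(hyp)
        errors += edit_distance(r, h)
        tokens += len(r)
    return errors / max(tokens, 1)

# Example: the hypothesis drops the English word "to" -> MER = 1/5 = 0.2
print(mixed_error_rate(["我想 listen to music"], ["我想 listen music"]))
```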
Keywords
bilingual neural transducer,text generation,code-switching