Aligning Speech to Languages to Enhance Code-switching Speech Recognition
arXiv (2024)
Abstract
Code-switching (CS) refers to the switching of languages within a speech
signal and results in language confusion for automatic speech recognition
(ASR). To address language confusion, we propose the language alignment loss
that performs frame-level language identification using pseudo language labels
learned from the ASR decoder. This eliminates the need for frame-level language
annotations. To further tackle the complex token alternatives for language
modeling in bilingual scenarios, we propose to employ large language models via
a generative error correction method. A linguistic hint that incorporates
language information (derived from the proposed language alignment loss and
decoded hypotheses) is introduced to guide the prompting of large language
models. The proposed methods are evaluated on the SEAME dataset and data from
the ASRU 2019 Mandarin-English code-switching speech recognition challenge. The
incorporation of the proposed language alignment loss demonstrates a higher
CS-ASR performance with only a negligible increase in the number of parameters
on both datasets compared to the baseline model. This work also highlights the
efficacy of language alignment loss in balancing primary-language-dominant
bilingual data during training, with an 8.6% relative improvement on the ASRU
dataset compared to the baseline model. Performance evaluation using large
language models reveals the advantage of the linguistic hint, achieving 14.1%
and 5.5% relative improvements on the test sets of the ASRU and SEAME datasets,
respectively.