Multi-Stage Multi-Modal Pre-Training for Automatic Speech Recognition
arXiv (2024)
Abstract
Recent advances in machine learning have demonstrated that multi-modal
pre-training can improve automatic speech recognition (ASR) performance
compared to randomly initialized models, even when models are fine-tuned on
uni-modal tasks. Existing multi-modal pre-training methods for the ASR task
have primarily focused on single-stage pre-training where a single unsupervised
task is used for pre-training followed by fine-tuning on the downstream task.
In this work, we introduce a novel method combining multi-modal and multi-task
unsupervised pre-training with a translation-based supervised mid-training
approach. We empirically demonstrate that such a multi-stage approach leads to
relative word error rate (WER) improvements of up to 38.45% on
both Librispeech and SUPERB. Additionally, we share several important findings
for choosing pre-training methods and datasets.
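The training recipe described above can be read as a three-stage schedule: unsupervised multi-modal/multi-task pre-training, a supervised translation-based mid-training stage, and supervised fine-tuning on the downstream ASR task. A minimal sketch of that schedule is below; all function and stage names are hypothetical placeholders, since the abstract does not specify the paper's actual models, losses, or step counts.

```python
# Hypothetical sketch of the multi-stage schedule described in the abstract.
# run_stage is a placeholder; a real implementation would train a model with
# the named objective instead of returning a log entry.

def run_stage(name, objective):
    """Stand-in for one training stage; records which objective is used."""
    return {"stage": name, "objective": objective}

def multi_stage_training():
    log = []
    # Stage 1: multi-modal, multi-task unsupervised pre-training.
    log.append(run_stage("pre-training", "unsupervised multi-modal/multi-task"))
    # Stage 2: translation-based supervised mid-training.
    log.append(run_stage("mid-training", "supervised translation"))
    # Stage 3: supervised fine-tuning on the downstream ASR task.
    log.append(run_stage("fine-tuning", "supervised ASR"))
    return log

schedule = multi_stage_training()
print([s["stage"] for s in schedule])
```

The point of the sketch is only the ordering: the translation stage sits between unsupervised pre-training and task-specific fine-tuning, which is what distinguishes this recipe from the single-stage pre-train-then-fine-tune pipelines the abstract contrasts it with.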