Modular Speech-to-Text Translation for Zero-Shot Cross-Modal Transfer

CoRR (2023)

Abstract
Recent research has shown that independently trained encoders and decoders, combined through a shared fixed-size representation, can achieve competitive performance in speech-to-text translation. In this work, we show that this type of approach can be further improved with multilingual training. We observe significant improvements in zero-shot cross-modal speech translation, even outperforming a supervised approach based on XLSR for several languages.
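To make the idea concrete, the sketch below illustrates the modular setup the abstract describes: separately built text and speech encoders that both map variable-length input into one shared fixed-size vector, so a decoder paired with the text encoder can be reused unchanged with the speech encoder (zero-shot cross-modal transfer). All class names, dimensions, and the mean-pooling scheme are hypothetical illustrations, not the paper's actual architecture.

```python
import numpy as np

FIXED_DIM = 16  # size of the shared fixed-size representation (hypothetical)

class TextEncoder:
    """Maps a variable-length token sequence to one fixed-size vector."""
    def __init__(self, rng):
        self.proj = rng.standard_normal((8, FIXED_DIM))
    def encode(self, token_embs):
        # (seq_len, 8) -> (FIXED_DIM,) via projection + mean-pooling
        return np.mean(token_embs @ self.proj, axis=0)

class SpeechEncoder:
    """Maps variable-length acoustic frames into the SAME fixed-size space."""
    def __init__(self, rng):
        self.proj = rng.standard_normal((12, FIXED_DIM))
    def encode(self, frames):
        # (n_frames, 12) -> (FIXED_DIM,)
        return np.mean(frames @ self.proj, axis=0)

class Decoder:
    """Consumes only the shared representation; never sees raw speech."""
    def __init__(self, rng):
        self.out = rng.standard_normal((FIXED_DIM, 4))
    def decode(self, z):
        return z @ self.out  # logits over a toy 4-word vocabulary

rng = np.random.default_rng(0)
text_enc, speech_enc, dec = TextEncoder(rng), SpeechEncoder(rng), Decoder(rng)

# The decoder is (notionally) trained against the text encoder's outputs...
z_text = text_enc.encode(rng.standard_normal((5, 8)))
# ...then reused unchanged with the speech encoder: zero-shot cross-modal.
z_speech = speech_enc.encode(rng.standard_normal((30, 12)))

assert z_text.shape == z_speech.shape == (FIXED_DIM,)
print(dec.decode(z_speech).shape)  # → (4,)
```

Because both encoders emit vectors of the same fixed size, swapping the input modality requires no change to the decoder; the paper's contribution is showing that multilingual training further aligns these shared representations across languages and modalities.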
Keywords
transfer, translation, speech-to-text, zero-shot, cross-modal