Contextual Spelling Correction with Large Language Models.

2023 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU), 2023

Abstract
Contextual Spelling Correction (CSC) models are used to improve automatic speech recognition (ASR) quality given user-specific context. Typically, context is modeled as a large set of text spans to compare against a given ASR hypothesis using some distance measure (text, phonetic, or neural embedding). In this work we propose a CSC system based on a single Large Language Model (LLM) adapted with prompt tuning. Our approach is shown to be data efficient and does not require dedicated serving. Our system exhibits advanced contextualization capabilities, such as support for phonetic spellings, cross-lingual scripts, and context specified as topics, with little to no data engineering. On voice assistant datasets, our system achieves a 7.8% absolute word error rate reduction over a reference ASR system when relevant context is available, improving upon other contextualization solutions. Finally, we test our system in a prompt-injection attack scenario and report vulnerabilities and mitigations.
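The span-comparison baseline that the abstract contrasts against can be illustrated with a minimal sketch: compare each word of an ASR hypothesis against user-specific context spans using a text distance measure (Levenshtein here) and rewrite close matches. The function names, the `max_dist` threshold, and the example contact list are illustrative assumptions, not part of the paper's system.

```python
# Minimal sketch of span-based contextual spelling correction: rewrite an
# ASR hypothesis word with the nearest user-context span when it falls
# within a small edit-distance budget. All names here are illustrative.

def levenshtein(a: str, b: str) -> int:
    """Classic edit distance between two strings (dynamic programming)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def correct_hypothesis(hypothesis: str, context_spans: list[str],
                       max_dist: int = 2) -> str:
    """Replace each hypothesis word with its nearest context span
    if that span lies within max_dist edits (case-insensitive)."""
    out = []
    for word in hypothesis.split():
        best = min(context_spans,
                   key=lambda s: levenshtein(word.lower(), s.lower()))
        if levenshtein(word.lower(), best.lower()) <= max_dist:
            out.append(best)
        else:
            out.append(word)
    return " ".join(out)

# Hypothetical usage: biasing toward a user's contact list.
print(correct_hypothesis("call jon smyth", ["John", "Smith", "Alice"]))
```

A real system of this kind would typically also use phonetic or neural-embedding distances; the LLM-based approach in the paper replaces this matching machinery with a single prompt-tuned model.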
Keywords
speech recognition,contextual adaptation