Deferred NAM: Low-latency Top-K Context Injection via Deferred Context Encoding for Non-Streaming ASR
arXiv (2024)
Abstract
Contextual biasing enables speech recognizers to transcribe important phrases
in the speaker's context, such as contact names, even if they are rare in, or
absent from, the training data. Attention-based biasing is a leading approach
which allows for full end-to-end cotraining of the recognizer and biasing
system and requires no separate inference-time components. Such biasers
typically consist of a context encoder; followed by a context filter which
narrows down the context to apply, improving per-step inference time; and,
finally, context application via cross attention. Though much work has gone
into optimizing per-frame performance, the context encoder is at least as
important: recognition cannot begin before context encoding ends. Here, we show
the lightweight phrase selection pass can be moved before context encoding,
resulting in a speedup of up to 16.1 times and enabling biasing to scale to 20K
phrases with a maximum pre-decoding delay under 33ms. With the addition of
phrase- and wordpiece-level cross-entropy losses, our technique also achieves
up to a 37.5% relative WER reduction over the baseline without the losses and
lightweight phrase selection pass.
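To make the deferred ordering concrete, below is a minimal NumPy sketch of the pipeline the abstract describes: a cheap phrase representation is scored against the audio to pick the top-K phrases first, and only those K survivors are passed through the expensive context encoder before cross-attention applies them to the audio frames. The dimensions, the mean-pooled audio summary used for scoring, and the helper names (lightweight_phrase_embed, full_context_encoder, select_top_k, cross_attend) are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes (assumptions for illustration, not the paper's configuration).
N_PHRASES, MAX_LEN, D_LIGHT, D_FULL, D_AUDIO, TOP_K = 20000, 8, 64, 256, 256, 16

def lightweight_phrase_embed(phrase_ids, emb_table):
    """Cheap per-phrase representation: mean of wordpiece embeddings.
    Stands in for the inexpensive features used to rank phrases
    before any heavy context encoding runs."""
    return emb_table[phrase_ids].mean(axis=1)             # (N, D_LIGHT)

def select_top_k(audio_summary, light_embs, w, k):
    """Lightweight phrase selection: score every phrase against a pooled
    audio representation and keep the k best-scoring indices."""
    scores = light_embs @ (w @ audio_summary)              # (N,)
    return np.argpartition(-scores, k)[:k]

def full_context_encoder(phrase_ids, emb_table, proj):
    """Expensive context encoder (placeholder for e.g. a Transformer).
    In the deferred setup it sees only the K selected phrases."""
    pooled = emb_table[phrase_ids].mean(axis=1)            # (K, D_FULL)
    return np.tanh(pooled @ proj)                          # (K, D_FULL)

def cross_attend(audio_frames, context_keys, w_q):
    """Context application: one cross-attention read per audio frame."""
    q = audio_frames @ w_q                                 # (T, D_FULL)
    att = q @ context_keys.T / np.sqrt(context_keys.shape[1])
    att = np.exp(att - att.max(axis=1, keepdims=True))
    att /= att.sum(axis=1, keepdims=True)
    return att @ context_keys                              # (T, D_FULL)

# Random toy inputs: 20K phrases of wordpiece ids, 100 audio frames.
phrase_ids = rng.integers(0, 4096, size=(N_PHRASES, MAX_LEN))
audio_frames = rng.normal(size=(100, D_AUDIO))
light_table = rng.normal(size=(4096, D_LIGHT)) * 0.1
full_table = rng.normal(size=(4096, D_FULL)) * 0.1
proj = rng.normal(size=(D_FULL, D_FULL)) * 0.1
w_sel = rng.normal(size=(D_LIGHT, D_AUDIO)) * 0.1
w_q = rng.normal(size=(D_AUDIO, D_FULL)) * 0.1

# Deferred ordering: (1) cheap selection over all 20K phrases,
# (2) expensive encoding of only the K survivors, (3) cross-attention.
audio_summary = audio_frames.mean(axis=0)
light_embs = lightweight_phrase_embed(phrase_ids, light_table)
top_k = select_top_k(audio_summary, light_embs, w_sel, TOP_K)
context_keys = full_context_encoder(phrase_ids[top_k], full_table, proj)
biasing_out = cross_attend(audio_frames, context_keys, w_q)
print(biasing_out.shape)   # (100, 256)
```

In this ordering the heavy encoder processes only K=16 phrases instead of all 20K, so the pre-decoding delay no longer grows with the size of the phrase list, which is the source of the latency savings the abstract reports.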