A hierarchical language model based on variable-length class sequences: the MC_n^ν approach

IEEE Transactions on Speech and Audio Processing (2002)

Abstract
We propose a new language model which represents long-term dependencies between word sequences using a multilevel hierarchy. We call this model MC_n^ν, where n is the maximum number of words in a sequence and ν is the maximum number of levels. The originality of this model, which is an extension of the multigrams, is its ability to take into account long-distance dependencies according to dependent variable-length sequences. In order to discover the variable-length sequences and to build the hierarchy, we use a set of 233 syntactic classes extracted from eight elementary grammatical classes of French. The MC_n^ν model learns hierarchical word patterns and uses them to reevaluate and filter the n-best utterance hypotheses output by our speech recognizer MAUD. The model has been trained on a corpus of 43 million words extracted from the French newspaper Le Monde and uses a vocabulary of 20 000 words. Tests have been conducted on 300 sentences. Compared to the class trigram and the baseline multigram approach, we report a perplexity reduction of 17% and 20%, respectively. Rescoring the original n-best hypotheses resulted in an improvement of the word error rate: 7% and 2% compared to the class trigram and multigrams, respectively.
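The paper's MC_n^ν implementation is not public, but the multigram idea it extends, segmenting a class stream into variable-length sequences of at most n symbols so as to maximize the segmentation likelihood, can be sketched with a small Viterbi-style dynamic program. The class labels and sequence probabilities below are illustrative assumptions, not values from the paper.

```python
import math

def best_segmentation(classes, seq_logprob, n=3):
    """Split `classes` into variable-length chunks (length <= n)
    maximizing the sum of per-chunk log-probabilities, multigram-style.

    seq_logprob: dict mapping class tuples to log-probabilities;
    unseen chunks get a small floor probability (illustrative choice)."""
    floor = math.log(1e-6)
    L = len(classes)
    best = [float("-inf")] * (L + 1)  # best[i]: best score for classes[:i]
    best[0] = 0.0
    back = [0] * (L + 1)              # back[i]: start index of last chunk
    for i in range(1, L + 1):
        for k in range(1, min(n, i) + 1):
            chunk = tuple(classes[i - k:i])
            score = best[i - k] + seq_logprob.get(chunk, floor)
            if score > best[i]:
                best[i], back[i] = score, i - k
    # Recover the chunk sequence by walking the backpointers.
    chunks, i = [], L
    while i > 0:
        chunks.append(tuple(classes[back[i]:i]))
        i = back[i]
    return list(reversed(chunks)), best[L]

# Toy probabilities over hypothetical syntactic classes.
probs = {("DET", "NOUN"): math.log(0.4), ("VERB",): math.log(0.3),
         ("DET",): math.log(0.1), ("NOUN",): math.log(0.1)}
chunks, score = best_segmentation(["DET", "NOUN", "VERB"], probs)
# The two-class chunk ("DET", "NOUN") outscores segmenting it word by word.
```

In the full MC_n^ν model this segmentation is applied recursively, up to ν levels, so that chunks found at one level become the symbols of the next; the sketch above shows only a single level.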
Keywords
speech recognition,grammars,language model,natural languages