Sentence-Level Evaluation Using Co-occurrences of N-Grams

Artificial Neural Networks - ICANN 2008, Part I (2008)

Abstract
This work presents a method for evaluating Greek sentences with respect to word order errors. The method reorders the words of a sentence and selects the version that maximizes the number of trigram hits according to a language model. The new element of the proposed technique is the incorporation of a unigram probability, which corresponds to the frequency with which each unigram occupies the first and the last position of the sentences in the training set. The comparative advantage of this method is that it works with a large vocabulary and avoids the laborious and costly process of collecting word order errors to create error patterns.
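The abstract only sketches the scoring idea, so the following is a minimal illustration of one way it could work, assuming a toy trigram count table, first/last-position unigram counts, a small additive weight for the positional bonus, and exhaustive permutation search (feasible only for short sentences). The function names, data structures, and weighting are illustrative assumptions, not the authors' implementation.

```python
# Sketch of trigram-hit scoring with a first/last-position unigram bonus.
# All names and the 0.5 bonus weight are assumptions for illustration.
from itertools import permutations
from collections import Counter

def train_counts(corpus):
    """Collect trigram counts and first/last-position unigram counts
    from a list of tokenized training sentences."""
    trigrams, first_pos, last_pos = Counter(), Counter(), Counter()
    for sent in corpus:
        first_pos[sent[0]] += 1
        last_pos[sent[-1]] += 1
        for i in range(len(sent) - 2):
            trigrams[tuple(sent[i:i + 3])] += 1
    return trigrams, first_pos, last_pos

def score(order, trigrams, first_pos, last_pos):
    """Count trigram hits and add a small bonus when the first/last word
    is attested in that position in the training data."""
    hits = sum(1 for i in range(len(order) - 2)
               if tuple(order[i:i + 3]) in trigrams)
    bonus = (first_pos[order[0]] > 0) + (last_pos[order[-1]] > 0)
    return hits + 0.5 * bonus  # bonus weight is an assumed value

def best_reordering(sentence, trigrams, first_pos, last_pos):
    """Exhaustively reorder the words and return the highest-scoring order."""
    return max(permutations(sentence),
               key=lambda order: score(order, trigrams, first_pos, last_pos))
```

Under this reading, a candidate sentence would be flagged for word order problems whenever some reordering of its words scores strictly higher than the order in which it was given.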
Keywords
training set sentence, comparative advantage, evaluation method, error pattern, large set, word order error, unigram probability, probability corresponds, Greek sentence, sentence-level evaluation, costly process, language model, word order