JGLUE: Japanese General Language Understanding Evaluation.

International Conference on Language Resources and Evaluation (LREC), 2022

Abstract
To develop high-performance natural language understanding (NLU) models, it is necessary to have a benchmark that evaluates and analyzes NLU ability from various perspectives. While the English NLU benchmark, GLUE (Wang et al., 2018), has been the forerunner, benchmarks are now being released for languages other than English, such as CLUE (Xu et al., 2020) for Chinese and FLUE (Le et al., 2020) for French, but no such benchmark exists for Japanese. We build a Japanese NLU benchmark, JGLUE, from scratch without translation to measure general NLU ability in Japanese. We hope that JGLUE will facilitate NLU research in Japanese.
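As a hedged illustration of how a benchmark like JGLUE might be consumed in practice, the sketch below loads one of its tasks with the Hugging Face datasets library. JNLI (JGLUE's natural language inference task) is a real JGLUE component, but the Hub repository id used here is a placeholder assumption, not something specified in this abstract.

```python
# A minimal sketch, assuming JGLUE is published on the Hugging Face Hub.
# The repository id "example-org/JGLUE" is hypothetical; substitute the
# identifier of an actual JGLUE release.
from datasets import load_dataset

# Load JNLI, the JGLUE natural language inference task; the presence of
# a "validation" split is an assumption about the hypothetical release.
jnli = load_dataset("example-org/JGLUE", name="JNLI")

# Inspect a few validation examples: Japanese sentence pairs annotated
# with an inference label.
for example in jnli["validation"].select(range(3)):
    print(example)
```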
Keywords
language, evaluation, understanding