Automated Assessment of Quality and Coverage of Ideas in Students' Source-Based Writing.

AIED (2) (2021)

Abstract
Source-based writing is an important academic skill in higher education, as it helps instructors evaluate students' understanding of subject matter. To assess the potential for supporting instructors' grading, we design an automated assessment tool for students' source-based summaries using natural language processing techniques. The tool includes a special-purpose parser that decomposes sentences into clauses, a pre-trained semantic representation method, a novel algorithm that allocates ideas to weighted content units, and another algorithm for scoring students' writing. We present results on three sets of student writing in higher education: two sets of STEM student writing samples and a set of reasoning sections of case briefs from a law school preparatory course. We show that the tool achieves promising results, correlating well with reliable human rubric scores and helping instructors identify issues in the grades they assign. We then discuss limitations and two improvements: a neural model that learns to decompose complex sentences into simple sentences, and a distinct model that learns a latent representation.
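The abstract outlines a pipeline of clause decomposition, pre-trained semantic representation, allocation of ideas to weighted content units, and scoring. The sketch below is only a minimal illustration of that style of content-coverage scoring, not the paper's implementation: the naive clause splitter, the sentence-transformers encoder, the 0.6 similarity threshold, and the weighted-coverage formula are all assumptions introduced here for illustration.

```python
# Hypothetical sketch; names, thresholds, and the scoring formula are assumptions, not from the paper.
import numpy as np
from sentence_transformers import SentenceTransformer  # assumed pre-trained semantic representation

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed choice of encoder


def split_into_clauses(sentence: str) -> list[str]:
    # Stand-in for the paper's special-purpose clause parser:
    # here we simply split on semicolons and commas.
    parts = [p.strip() for p in sentence.replace(";", ",").split(",")]
    return [p for p in parts if p]


def coverage_score(student_sentences, content_units, weights, threshold=0.6):
    """Score a student summary against weighted reference content units.

    A content unit counts as covered if any student clause is
    semantically close to it (cosine similarity >= threshold).
    """
    clauses = [c for s in student_sentences for c in split_into_clauses(s)]
    if not clauses:
        return 0.0
    clause_vecs = model.encode(clauses, normalize_embeddings=True)
    unit_vecs = model.encode(content_units, normalize_embeddings=True)
    # Cosine similarity matrix (units x clauses); vectors are already normalized.
    sims = unit_vecs @ clause_vecs.T
    covered = sims.max(axis=1) >= threshold
    return float(np.dot(covered, weights) / np.sum(weights))


# Example usage with invented data:
units = [
    "Photosynthesis converts light energy into chemical energy",
    "Chlorophyll absorbs light in the chloroplast",
]
summary = ["Plants turn sunlight into chemical energy, and chlorophyll absorbs the light."]
print(coverage_score(summary, units, weights=[2.0, 1.0]))
```

This greedy, threshold-based matching is one simple way to operationalize "coverage of weighted content units"; the paper's own allocation and scoring algorithms may differ substantially.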
Keywords
Natural language processing, Content analysis, Rubric-based writing assessment