Toward Computer-Aided Assessment of Textual Exercises in Very Large Courses

Computer Science Education (2021)

Abstract
First-year computer science university courses commonly exceed the 1,750-student mark. This rising demand for higher education leads to an increased workload for instructors. Open-ended textual exercises facilitate the comprehension of problem-solving skills. Offering individual feedback that helps learners build on their existing knowledge by learning from mistakes is challenging: grading textual exercises is a manual, repetitive, and time-consuming task, seldom aided by technology. In most cases, many graders have to be employed to accommodate large student populations. We present Athene, a computer-aided system for grading textual exercises at scale, integrated into the Artemis learning platform. We use topic modeling to split student answers into segments. Language embeddings and clustering sort the segments into groups based on similarity. To aid graders, the system uses similarity metrics to create a pre-grading, consisting of feedback and a score, for each segment. Further, we use these metrics to detect grading inconsistencies and facilitate consistent grading at scale. We used Athene to grade 17 open-ended textual exercises in an introductory software engineering course at the Technical University of Munich with 1,800 registered students. Athene pre-graded 26% of the total assessments. Instructors left 85% of these pre-graded assessments unchanged, extended 5% with a comment, and overwrote 10%. Future work focuses on increasing the percentage of computer-aided assessments, automating the grading process, and language-independent grading.
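The abstract outlines a three-step pipeline: segment student answers, embed and cluster the segments by similarity, and reuse existing grader feedback within a cluster as a pre-grading suggestion. The following Python sketch illustrates that idea only; the embedding model (sentence-transformers), the clustering algorithm (k-means), and the similarity threshold are illustrative assumptions, not the components Athene actually uses.

```python
# Minimal sketch of the embed / cluster / pre-grade idea described in the abstract.
# Library and model choices are placeholders, not the paper's actual pipeline.
from dataclasses import dataclass
from typing import Optional

import numpy as np
from sentence_transformers import SentenceTransformer  # assumed embedding backend
from sklearn.cluster import KMeans
from sklearn.metrics.pairwise import cosine_similarity


@dataclass
class Segment:
    text: str
    score: Optional[float] = None   # grader-assigned points, if already assessed
    feedback: Optional[str] = None  # grader-written comment, if already assessed


def propose_pre_gradings(segments: list[Segment],
                         n_clusters: int = 10,
                         similarity_threshold: float = 0.8) -> list[Segment]:
    """Cluster similar answer segments and copy score/feedback from the most
    similar already-graded segment in the same cluster onto ungraded ones."""
    model = SentenceTransformer("all-MiniLM-L6-v2")  # placeholder model name
    embeddings = model.encode([s.text for s in segments])

    labels = KMeans(n_clusters=n_clusters, n_init="auto").fit_predict(embeddings)

    for cluster in set(labels):
        idx = [i for i, label in enumerate(labels) if label == cluster]
        graded = [i for i in idx if segments[i].score is not None]
        ungraded = [i for i in idx if segments[i].score is None]
        if not graded or not ungraded:
            continue  # nothing to reuse (or nothing left to grade) in this cluster
        sims = cosine_similarity(embeddings[ungraded], embeddings[graded])
        for row, i in enumerate(ungraded):
            best = int(np.argmax(sims[row]))
            if sims[row][best] >= similarity_threshold:
                segments[i].score = segments[graded[best]].score
                segments[i].feedback = segments[graded[best]].feedback
    return segments
```

In this sketch, a grader's manual assessments seed each cluster, and only sufficiently similar ungraded segments receive a suggestion; segments below the threshold are left for manual review, mirroring the paper's reported split between pre-graded and manually graded assessments.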