Nudging student learning strategies using formative feedback in automatically graded assessments.

SPLASH-E (2020)

Abstract
Automated assessment tools are widely used to provide formative feedback to undergraduate students in computer science courses while simultaneously helping those courses scale to meet student demand. While formative feedback is a laudable goal, we have observed many students trying to debug their solutions into existence using only the feedback given, losing sight of the learning goals intended by the course staff. In this paper, we detail two case studies from second- and third-year undergraduate software engineering courses indicating that giving students only nudges about where to focus their efforts can improve how they act on generated feedback. By carefully reasoning about the errors uncovered by our automated assessment approaches, we have been able to create feedback that helps students revisit the learning outcomes for the assignment or course. This approach has been applied both to multiple-choice feedback in an online quiz-taking system and to automated assessment of student programming tasks. We have found that student performance has not suffered and that students reflect positively on how they investigate automated assessment failures.