Ensuring Honest Effort in Peer Grading

Anujit Chakraborty, Jatin Jindal, Swaprava Nath

arXiv (2019)

Abstract
Massive open online courses (MOOCs) pose a great challenge for grading a huge number of answer-scripts with high accuracy. Peer grading is a scalable solution to this challenge, but current practices largely depend on the altruism of the peer graders. Some peer-grading approaches treat it as a best-effort service of the graders, and statistically correct their inaccuracies before awarding the final scores. Approaches that incentivize non-strategic behavior of the peer graders do not make use of certain additional information that may be available, e.g., that the true grade can eventually be observed, at the additional cost of teaching-staff time, if an affected student raises a regrading request. In this paper, we use such additional information and introduce a mechanism, TRUPEQA, that (a) uses a constant number of instructor-graded answer-scripts to quantitatively measure the accuracies of the peer graders and corrects the scores accordingly, and (b) penalizes deliberate underperformance. We show that this mechanism is unique in its class to satisfy certain properties. Our human-subject experiments show that TRUPEQA improves grading quality over the mechanisms currently used in standard MOOCs.
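As a rough illustration of part (a) of the abstract, the following is a minimal sketch, not the paper's actual TRUPEQA formulas, of how a constant number of instructor-graded "probe" answer-scripts could be used to estimate a peer grader's systematic bias and correct that grader's remaining scores. The function names and the clamping range are assumptions for the example.

```python
def estimate_bias(peer_scores, true_scores):
    """Mean signed error of the grader on the instructor-graded probes."""
    assert len(peer_scores) == len(true_scores) > 0
    errors = [p - t for p, t in zip(peer_scores, true_scores)]
    return sum(errors) / len(errors)

def correct_scores(raw_scores, bias, lo=0.0, hi=100.0):
    """Subtract the estimated bias, clamping to a valid grade range."""
    return [min(hi, max(lo, s - bias)) for s in raw_scores]

# Example: a grader who scores the probes 5 points too low on average.
bias = estimate_bias([60, 70, 55], [65, 75, 60])  # -> -5.0
corrected = correct_scores([80, 90], bias)        # -> [85.0, 95.0]
```

The actual mechanism additionally penalizes deliberate underperformance, which this sketch does not model.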