Truthful Peer Grading with Limited Effort from Teaching Staff.

arXiv: Computer Science and Game Theory (2018)

Cited by 23 | Views 23
Abstract
Massive open online courses pose a major challenge for grading answerscripts at high accuracy. Peer grading is often viewed as a scalable solution to this challenge, but it depends largely on the altruism of the peer graders. Some approaches in the literature treat peer grading as a 'best-effort service' of the graders and statistically correct their inaccuracies before awarding the final scores, but ignore the graders' strategic behavior. A few other approaches incentivize non-manipulative actions of the peer graders but do not make use of certain additional information that is potentially available in a peer grading setting, e.g., that the true grade can eventually be observed at an additional cost. This cost can be thought of as the additional effort from the teaching staff if they had to take a final look at the corrected papers after peer grading. In this paper, we use such additional information and introduce a mechanism, TRUPEQA, that (a) uses a constant number of instructor-graded answerscripts to quantitatively measure the accuracies of the peer graders and corrects the scores accordingly, (b) ensures truthful revelation of their observed grades, (c) penalizes manipulation, but not inaccuracy, and (d) reduces the total cost of arriving at the true grades, i.e., the additional person-hours of the teaching staff. We show that this mechanism outperforms several standard peer grading techniques used in practice, even when the graders are non-manipulative.
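Point (a) of the abstract, calibrating peer graders against a small set of instructor-graded answerscripts, can be illustrated with a minimal sketch. This is not the paper's actual TRUPEQA mechanism; the functions, the probe-set idea as a mean-bias estimate, and all numbers below are illustrative assumptions only.

```python
# Hypothetical sketch: estimate a peer grader's systematic bias from a few
# instructor-graded "probe" papers, then correct that grader's other scores.
# (Assumed simplification of the calibration step; not the paper's mechanism.)

def estimate_bias(reported, true_scores):
    """Mean signed error of a grader over the instructor-graded probe papers."""
    return sum(r - t for r, t in zip(reported, true_scores)) / len(true_scores)

def corrected_score(report, bias):
    """Remove the grader's estimated bias from a reported score."""
    return report - bias

# Illustrative example: a grader who over-grades by about 2 points.
probe_reports = [82, 77, 91]   # grader's scores on the probe papers
probe_truth = [80, 75, 89]     # instructor's scores on the same papers
bias = estimate_bias(probe_reports, probe_truth)   # 2.0
print(corrected_score(88, bias))                   # 86.0
```

Because the probe set has constant size, the instructor's extra effort stays fixed regardless of how many answerscripts the peers grade.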