Planting Bugs: A System for Testing Students' Unit Tests.

ITiCSE 2015

ABSTRACT

Automated marking of student programming assignments has long been a goal of IT educators. Much of this work has focused on the correctness of small student programs, and only limited attention has been given to systematic assessment of the effectiveness of student testing. In this work, we introduce SAM (the Seeded Auto Marker), a system that automatically assesses both the program code and the unit tests supplied by students. Our central contribution is the use of programs seeded with specific bugs to analyse the effectiveness of the students' unit tests. Beginning with our intended solution program, and guided by our own set of unit tests, we create a suite of minor variations on the solution, each seeded with a single error. Ideally, a student's unit tests should not only identify the presence of each bug, but should do so through the failure of as few tests as possible, indicating focused test cases with minimal redundancy. We describe our system and the creation of the seeded test programs, and report our experiences with the approach in practice. In particular, we find that students often fail to provide adequate coverage, and that their tests frequently reflect a poor understanding of the limitations imposed by the abstraction.