Assessing and Improving the Quality of Generated Tests in the Context of Maintenance Tasks.

COMPSAC (2023)

Abstract
Maintenance tasks often rely on failing test cases, highlighting the importance of well-designed tests for their success. While automatically generated tests can provide higher code coverage and detect faults, it is unclear whether they can effectively guide maintenance tasks or whether developers fully accept them. In our recent work, we presented the results of a series of empirical studies that evaluated the practical support offered by generated tests. Our studies with 126 developers showed that automatically generated tests can effectively identify faults during maintenance tasks. Developers were equally effective in creating bug fixes when using manually written, Evosuite, and Randoop tests. However, developers perceived generated tests as not well-designed and preferred refactored versions of Randoop tests. We plan to enhance Evosuite tests and propose an approach/tool that assesses the quality of generated tests and automatically enhances them. Our research may impact the design and use of generated tests in the context of maintenance tasks.
Keywords
generated tests, maintenance tasks, Randoop, Evosuite, test smells, refactoring