Saying 'Hi!' is not enough: mining inputs for effective test generation.

ASE 2017

Abstract
Automatically generating unit tests is a powerful approach to exercise complex software. Unfortunately, current techniques often fail to provide relevant input values, such as strings that bypass domain-specific sanity checks. As a result, state-of-the-art techniques are effective for generic classes, such as collections, but less successful for domain-specific software. This paper presents TestMiner, the first technique for mining a corpus of existing tests for input values to be used by test generators for effectively testing software not in the corpus. The main idea is to extract literals from thousands of tests and to adapt information retrieval techniques to find values suitable for a particular domain. Evaluating the approach with 40 Java classes from 18 different projects shows that TestMiner improves test coverage by 21% over an existing test generator. The approach can be integrated into various test generators in a straightforward way, increasing their effectiveness on previously difficult-to-test classes.
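To make the core idea concrete, the following is a minimal Java sketch of the approach described in the abstract: literals mined from existing tests are indexed under tokens of the context in which they occur (e.g., the tested method's name), and a test generator queries that index to obtain domain-relevant input values. The class and method names, the simple frequency-based scoring, and the example data are illustrative assumptions for this sketch; the paper adapts information retrieval techniques and is not limited to this scheme.

```java
import java.util.*;
import java.util.stream.Collectors;

// Illustrative sketch (not the paper's implementation): index literals mined
// from existing tests by context tokens, then retrieve candidate input values
// for a method under test by matching its name tokens against the index.
public class LiteralMinerSketch {

    // Maps a context token (e.g., "email", "parse") to literals seen with it, with counts.
    private final Map<String, Map<String, Integer>> index = new HashMap<>();

    // Record a literal observed in a mined test under the given context tokens.
    public void addObservation(Collection<String> contextTokens, String literal) {
        for (String token : contextTokens) {
            index.computeIfAbsent(token.toLowerCase(), k -> new HashMap<>())
                 .merge(literal, 1, Integer::sum);
        }
    }

    // Retrieve the top-k literals whose mined contexts best match the query tokens,
    // using a simple frequency sum as the score (a stand-in for IR-style ranking).
    public List<String> suggest(Collection<String> queryTokens, int k) {
        Map<String, Integer> scores = new HashMap<>();
        for (String token : queryTokens) {
            index.getOrDefault(token.toLowerCase(), Collections.emptyMap())
                 .forEach((literal, count) -> scores.merge(literal, count, Integer::sum));
        }
        return scores.entrySet().stream()
                .sorted(Map.Entry.<String, Integer>comparingByValue().reversed())
                .limit(k)
                .map(Map.Entry::getKey)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        LiteralMinerSketch miner = new LiteralMinerSketch();
        // Hypothetical observations mined from a corpus of existing tests.
        miner.addObservation(List.of("validate", "email"), "alice@example.com");
        miner.addObservation(List.of("parse", "email", "address"), "bob@example.org");
        miner.addObservation(List.of("parse", "date"), "2017-10-30");

        // A test generator about to call a hypothetical validateEmail(String)
        // asks for candidate inputs likely to pass domain-specific checks.
        System.out.println(miner.suggest(List.of("validate", "email"), 2));
        // Prints: [alice@example.com, bob@example.org]
    }
}
```

In this sketch, a test generator would query the index with tokens derived from the method under test instead of falling back to random or generic strings, which is what allows domain-specific sanity checks to be bypassed.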
Keywords
TestMiner, test coverage, test generators, difficult-to-test classes, mining inputs, effective test generation, complex software, relevant input values, state-of-the-art techniques, generic classes, domain-specific software, information retrieval techniques, automatic unit test generation, domain-specific sanity checks, Java classes