Mimicking user behavior to improve in-house test suites

Proceedings of the 41st International Conference on Software Engineering: Companion Proceedings (2019)

Abstract
Testing is today the most widely used software quality assurance approach. However, it is well known that the necessarily limited set of tests developed and run in-house is not representative of the rich variety of user executions in the field. To bridge this gap between in-house tests and field executions, we need a way to (1) identify behaviors exercised in the field that were not exercised in-house and (2) generate new tests that exercise such behaviors. In this context, we propose Replica, a technique that uses field execution data to guide test generation. Replica instruments the software before deploying it, so that field-data collection is triggered when a user exercises an untested behavior B, currently expressed as the violation of an invariant. When it receives the collected field data, Replica uses guided symbolic execution to generate one or more executions that exercise the previously untested behavior B. Our initial empirical evaluation, performed on a set of real user executions, shows that Replica can successfully generate tests that mirror field behaviors and have similar fault-detection capability. Our results also show that Replica can outperform a traditional input-generation approach that does not use field-data guidance.
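To make the instrumentation step concrete, the sketch below shows one plausible shape for an invariant-guarded probe: the deployed code checks an invariant inferred from the in-house test suite and records a field-data snapshot only when a user execution violates it. This is a minimal, hypothetical illustration, not Replica's actual implementation; the class name `FieldProbe`, the example invariant, and the log format are all assumptions.

```java
// Hypothetical sketch of invariant-triggered field-data collection.
// All names and the snapshot format are illustrative, not Replica's API.
import java.io.FileWriter;
import java.io.IOException;
import java.time.Instant;

public class FieldProbe {

    // Example invariant inferred from the in-house test suite
    // (assumed): "discount is always between 0 and 0.5".
    private static boolean invariantHolds(double discount) {
        return discount >= 0.0 && discount <= 0.5;
    }

    // Instrumented entry point: the original computation is unchanged;
    // the probe fires only on an untested behavior (invariant violation).
    public static double applyDiscount(double price, double discount) {
        if (!invariantHolds(discount)) {
            recordSnapshot(price, discount); // capture field data for later test generation
        }
        return price * (1.0 - discount);
    }

    // Persist the violating inputs so that guided symbolic execution can
    // later steer test generation toward this previously untested behavior.
    private static void recordSnapshot(double price, double discount) {
        try (FileWriter out = new FileWriter("field-snapshots.log", true)) {
            out.write(Instant.now() + " price=" + price
                      + " discount=" + discount + System.lineSeparator());
        } catch (IOException e) {
            // Data collection must never break the user's execution.
        }
    }

    public static void main(String[] args) {
        applyDiscount(100.0, 0.25); // tested in-house: invariant holds, no snapshot
        applyDiscount(100.0, 0.80); // untested field behavior: snapshot recorded
    }
}
```

Under this reading, the recorded snapshot supplies the concrete values that the symbolic-execution phase uses as guidance, constraining path exploration toward executions that reproduce the violated invariant.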
Keywords
field data, software testing, test generation