Improving Testing by Mimicking User Behavior

2020 IEEE International Conference on Software Maintenance and Evolution (ICSME), 2020

Cited 4 | Views 19
Abstract
In-house tests are hardly representative of the rich variety of software behaviors exercised by real users in the field. To bridge the gap between in-house tests and field executions, we need ways to (1) identify behavior exercised in the field but not in-house, and (2) generate new tests that exercise such (or at least similar) behavior. In this context, we propose Replica, a technique that uses field execution data to guide test generation. Replica instruments the software before deploying it, so that field data collection is triggered when a user exercises an untested behavior. Then, when it receives the collected field data, Replica uses guided symbolic execution to generate executions that exercise this previously untested behavior. Our empirical evaluation shows that Replica can successfully generate tests that mimic field executions in terms of both behaviors exercised and faults detected. Our results also show that Replica can outperform a state-of-the-art input-generation technique that does not leverage field data.
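The abstract describes a two-step idea: detect field behavior that no in-house test covered, then generate tests that reproduce it. As a purely illustrative sketch (not Replica's actual implementation, which relies on instrumentation and guided symbolic execution), the first step can be approximated by comparing branch coverage from in-house tests against branches observed in the field. All names here (`classify`, the trace sets) are hypothetical:

```python
# Illustrative sketch only: Replica's real instrumentation and symbolic
# execution are not shown in this abstract. This toy example mimics the
# first step -- flagging field executions that exercise branches no
# in-house test covered.

covered_in_house = set()
covered_in_field = set()

def classify(x, trace):
    """Toy program under test; records which branch it takes."""
    if x < 0:
        trace.add("neg")
        return "negative"
    elif x == 0:
        trace.add("zero")
        return "zero"
    else:
        trace.add("pos")
        return "positive"

# In-house tests happen to exercise only positive inputs.
for x in (1, 5, 42):
    classify(x, covered_in_house)

# Field executions include an input the tests never tried.
for x in (-3, 7):
    classify(x, covered_in_field)

# Branches seen in the field but never in-house would trigger
# field-data collection in a Replica-like setup.
untested = covered_in_field - covered_in_house
print(sorted(untested))  # -> ['neg']
```

In this sketch, the `untested` set plays the role of the trigger for field-data collection; Replica would then use that collected data to guide symbolic execution toward the same behavior.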
Keywords
software testing,field data,test-input generation