Evaluating Random Input Generation Strategies for Accessibility Testing

ICEIS: Proceedings of the 23rd International Conference on Enterprise Information Systems - Vol. 2 (2021)

Abstract
Mobile accessibility testing is the process of checking whether a mobile app can be perceived, understood, and operated by a wide range of users. Accessibility testing tools can support this activity by automatically generating user inputs to navigate through the app under evaluation and running accessibility checks on each newly discovered screen. The algorithm that determines which user input will be generated to simulate the user interaction plays a pivotal role in such an approach. In state-of-the-art approaches, a Uniform Random algorithm is usually employed. In this paper, we compare the results of the default algorithm implemented by a state-of-the-art tool with four different biased random strategies, taking into account the number of activities executed, screen states traversed, and accessibility violations revealed. Our results show that the default algorithm had the worst performance, while the algorithm biased towards different weights assigned to specific actions and widgets had the best performance.
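The contrast between the Uniform Random baseline and a weight-biased strategy can be sketched as follows. This is a minimal illustration, not the paper's implementation: the action names and weight values are hypothetical, standing in for whatever actions and widget weights a testing tool would actually use.

```python
import random

# Hypothetical weights (not from the paper): a biased strategy favors
# actions more likely to reach new screens, e.g., taps on clickable widgets.
ACTION_WEIGHTS = {
    "tap_widget": 5,
    "scroll": 2,
    "long_press": 1,
    "back": 1,
}


def uniform_choice(actions):
    """Uniform Random baseline: every candidate action is equally likely."""
    return random.choice(actions)


def biased_choice(actions, weights):
    """Biased strategy: sample actions in proportion to assigned weights."""
    return random.choices(actions, weights=[weights[a] for a in actions], k=1)[0]


if __name__ == "__main__":
    actions = list(ACTION_WEIGHTS)
    print("uniform:", uniform_choice(actions))
    print("biased: ", biased_choice(actions, ACTION_WEIGHTS))
```

Under the biased strategy, `tap_widget` is drawn roughly five times as often as `long_press`, so the exploration spends more of its input budget on interactions likely to expose new screen states.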
Keywords
Accessibility, Automated, Testing, Tool, Evaluation, Random, Mobile