FCEVAL: An effective and quantitative platform for evaluating fuzzer combinations fairly and easily

Xiaoyun Zhao, Chao Yang, Zhizhuang Jia, Yue Wang, Jianfeng Ma

Computers & Security (2023)

Abstract
Multiple base fuzzers can collaborate as a fuzzer combination. Fuzzer combinations have been shown to fuzz complicated real-world programs more robustly and efficiently. Bug-finding efficiency under limited computational resources would greatly benefit from choosing fuzzer combinations through effective, quantitative performance evaluation. However, evaluating fuzzer combinations remains challenging due to the lack of an infrastructure for collaborative fuzzing, sufficiently efficient collaboration among base fuzzers, unified benchmarks, comprehensive metrics, and unified methods for analyzing coverage and bugs. This prevents the selection of efficient fuzzer combinations and thus impairs vulnerability mining on real-world targets. In this paper, we design and implement FCEVAL, the first open-source platform for evaluating fuzzer combinations. Specifically, we propose a new test case-sharing policy that increases fuzzing potential, providing a more efficient running environment for fuzzer combinations and thereby improving evaluation effectiveness. We also select a unified set of diverse benchmarks and comprehensive metrics, and adopt unified, independent methods for real-time coverage statistics and multiple-sanitizer-based bug analysis, to ensure fair and quantitative evaluation. In addition, we design tools and guidelines covering the whole evaluation process to improve usability. With these methodologies, we first build an infrastructure dedicated to collaborative fuzzing as the base of FCEVAL.
After comparing two test case-sharing policies on this infrastructure and choosing the more promising one as a core component of FCEVAL, we use FCEVAL to evaluate fuzzer combinations for more than 40,000 CPU hours and draw five important conclusions: (a) an efficient test case-sharing policy improves fuzzing potential and thus evaluation effectiveness; (b) comprehensive metrics are essential; (c) a 24-hour duration and 20 repetitions are substantial for evaluation; (d) independent analysis methods for code coverage and bugs deserve extensive adoption; and (e) FCEVAL can evaluate fuzzer combinations effectively, fairly, comprehensively, and easily. We also suggest how to improve collaborative fuzzing. Our source code and test data are publicly available.
Keywords
Fuzzer combination, Evaluation, Benchmark, Metric, Collaborative fuzzing