Benchmarking the Capability of Symbolic Execution Tools with Logic Bombs




Authors H. Xu, Zirui Zhao, Yangfan Zhou, Michael R. Lyu
Journal/Conference Name I
Paper Abstract Symbolic execution has become an indispensable technique for software testing and program analysis. However, since several symbolic execution tools are presently available off-the-shelf, there is a need for a practical benchmarking approach. This paper introduces a novel approach that can help benchmark symbolic execution tools in a fine-grained and efficient manner. The approach evaluates the performance of such tools against known challenges faced by general symbolic execution techniques, e.g., floating-point numbers and symbolic memories. We first survey related papers and systematize the challenges of symbolic execution. We extract 12 distinct challenges from the literature and classify them into two categories: symbolic-reasoning challenges and path-explosion challenges. Next, we develop a dataset of logic bombs and a framework for benchmarking symbolic execution tools automatically. For each challenge, our dataset contains several logic bombs, each addressing a specific challenging problem. Triggering one or more logic bombs confirms that the symbolic execution tool in question is able to handle the corresponding problem. Real-world experiments with three popular symbolic execution tools, namely KLEE, angr, and Triton, have shown that our approach can reveal the capabilities and limitations of the tools in handling specific issues accurately and efficiently. The benchmarking process generally takes only a few dozen minutes to evaluate a tool. We have released our dataset on GitHub as open source, with the aim of facilitating future community work on benchmarking symbolic execution tools.
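
Note: as a quick illustration of the logic-bomb idea described in the abstract, the sketch below shows what a floating-point challenge case might look like. It is a hypothetical example written in the spirit of the released C dataset, not a verbatim sample from it; the function name logic_bomb and the input symvar are illustrative. A tool passes such a case if it can generate an input that reaches the guarded branch (for example, symvar = 8 gives 8 / 70 ≈ 0.114, which falls inside the target interval).

#include <stdio.h>
#include <stdlib.h>

/* Hypothetical logic bomb targeting the floating-point challenge:
 * the guarded branch is reachable only if the symbolic execution
 * tool can reason about float arithmetic on the symbolic input. */
int logic_bomb(int symvar) {
    float f = symvar / 70.0f;        /* symbolic value flows into a float */
    if (f > 0.1 && f < 0.2) {
        printf("BOMB triggered\n");  /* reaching this line solves the case */
        return 1;
    }
    return 0;
}

int main(int argc, char *argv[]) {
    /* In a benchmarking run the tool marks this input symbolic;
     * here it simply comes from the command line. */
    int symvar = (argc > 1) ? atoi(argv[1]) : 0;
    return logic_bomb(symvar);
}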
Date of publication 2020
Code Programming Language C
