Improving Program Testing and Understanding via Symbolic Execution
Ma, Kin Keung
Foster, Jeffrey S
Symbolic execution is an automated program testing technique that has recently become practical, thanks to advances in constraint solvers. Generally speaking, a symbolic executor interprets a program with symbolic inputs, systematically enumerating the execution paths induced by the symbolic inputs and the program's control flow. In this dissertation, I discuss the architecture and implementation of Otter, a symbolic execution framework for C programs, and work that uses Otter to solve two program analysis problems.

First, we use Otter to solve the line reachability problem: given a target line in a program, find inputs that drive the program to that line. We propose two new directed search strategies: one uses a distance metric to guide symbolic execution toward the target, and the other iteratively runs symbolic execution from the start of the function containing the target, then jumps backward up the call chain toward the start of the program. We compare variants of these strategies with several existing undirected strategies from the literature on a suite of 9 GNU Coreutils programs. We find that most directed strategies perform extremely well in many cases, although they sometimes fail badly. However, by combining the distance metric with a random-path strategy, we obtain a strategy that performs best on average over our benchmarks. We also generalize the line reachability problem to multiple target lines and evaluate our new strategies under a different experimental setup. The results show that many directed strategies start off slightly slower than undirected strategies, but they catch up and perform best in the long run.

Second, we use Otter to study how run-time configuration options affect the behavior of configurable software systems. We conjecture that, at certain levels of abstraction, software configuration spaces are much smaller than combinatorics might suggest.
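As a rough illustration of the path enumeration described above (this is not Otter's implementation, which targets C and uses a real constraint solver), the sketch below interprets a toy branching program over a symbolic input: each branch forks execution, the conjunction of branch outcomes forms a path condition, and a stand-in "solver" (brute force over a small domain) finds one concrete input per path. All names here are hypothetical.

```python
# Toy program as a decision tree: ('if', predicate, then_tree, else_tree),
# with string leaves naming the path taken.
PROG = ('if', lambda x: x > 10,
            ('if', lambda x: x % 2 == 0, 'big-even', 'big-odd'),
        'small')

def enumerate_paths(tree, cond=lambda x: True):
    """DFS over branches, building each path condition as a conjunction
    of branch outcomes -- the forking a symbolic executor performs."""
    if isinstance(tree, str):
        return [(cond, tree)]
    _, pred, then_tree, else_tree = tree
    return (enumerate_paths(then_tree, lambda x, c=cond: c(x) and pred(x)) +
            enumerate_paths(else_tree, lambda x, c=cond: c(x) and not pred(x)))

def solve(cond, domain=range(-100, 101)):
    """Stand-in for a constraint solver: first satisfying input in a
    small domain."""
    return next(x for x in domain if cond(x))

# One concrete witness input per execution path, as a test generator
# driven by symbolic execution would emit.
tests = {leaf: solve(cond) for cond, leaf in enumerate_paths(PROG)}
```

Running this yields one input for each of the three paths, e.g. `12` for `'big-even'`, `11` for `'big-odd'`, and any value at most 10 for `'small'`.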
To evaluate our conjecture, we ran Otter on three configurable software systems with their concrete test suites, making only the configuration options symbolic. Otter recorded every execution path of these systems, and from this data we determined how the settings of configuration options affect line, basic block, edge, and condition coverage for our subjects under their test suites. Had we instead run the test suites under all configuration settings, generating the same data would have required many orders of magnitude more executions.
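A small hypothetical example of why making options symbolic beats exhaustive enumeration (the system, option names, and coverage model below are invented for illustration, not taken from the dissertation's subjects): a system with 10 boolean options has 2**10 = 1024 concrete settings, but if the code under test branches on only two of them, only a handful of distinct behaviors (paths) exist, and a symbolic run over the options would explore just those.

```python
from itertools import product

def run_test(opts):
    """Toy system under one concrete test: coverage depends on only two
    of the ten configuration options."""
    covered = {'main'}
    if opts['verbose']:
        covered.add('log_block')
    if opts['cache'] and opts['verbose']:
        covered.add('cache_trace')
    return frozenset(covered)

names = ['verbose', 'cache'] + ['opt%d' % i for i in range(8)]

# Exhaustive enumeration: 2**10 = 1024 concrete runs of the same test...
exhaustive = {run_test(dict(zip(names, bits)))
              for bits in product([False, True], repeat=len(names))}
# ...but only three distinct coverage outcomes exist, one per path:
# {main}, {main, log_block}, and {main, log_block, cache_trace}.
```

A symbolic executor treating the options as symbolic would discover those three paths directly, without the thousand redundant executions.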