Test improvement
Suggestions on Test Suite Improvements with Automatic Infection and Propagation Analysis.
Authors: Oscar Luis Vera-Pérez, Benjamin Danglot, Martin Monperrus and Benoit Baudry
Venue: arXiv preprint, https://arxiv.org/abs/1909.04770
Publication date: September 10, 2019
Abstract: An extreme transformation removes the body of a method that is reached by at least one test case. If the test suite passes on the original program and still passes after the extreme transformation, the transformation is said to be undetected, and the test suite needs to be improved. In this work we propose a technique to automatically determine which of the following three reasons prevents the detection of an extreme transformation: the test inputs are not sufficient to infect the state of the program; the infection does not propagate to the test cases; the test cases have a weak oracle that does not observe the infection. We have developed Reneri, a tool that observes the program under test and the test suite in order to determine runtime differences between test runs on the original and the transformed method. The observations gathered during the analysis are processed by Reneri to suggest possible improvements to the developers. We evaluate Reneri on 15 projects and a total of 312 undetected extreme transformations. The tool is able to generate a suggestion for each undetected transformation. For 63% of the cases, the existing test cases can infect the program state, meaning that undetected transformations are mostly due to observability and weak oracle issues. Interviews with developers confirm the relevance of the suggested improvements, and experiments with state-of-the-art automatic test generation tools indicate that no tool can improve the existing test suites to fix all undetected transformations.
Tags: Software testing, Test oracle, Test improvement, Reachability-infection-propagation, Extreme transformation
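To make the abstract's vocabulary concrete, here is a minimal Java/JUnit sketch of an extreme transformation and of the weak-oracle scenario the paper analyzes. It is a hypothetical example invented for illustration; the Counter class and test names do not come from the paper or from Reneri.

```java
import static org.junit.Assert.assertEquals;
import org.junit.Test;

class Counter {
    private int count = 0;

    void increment() {
        count++; // an extreme transformation empties this body
    }

    int getCount() {
        return count;
    }
}

public class CounterTest {
    // Reaches increment() and infects the program state (count stays 0 in
    // the transformed program), but the test never observes 'count', so the
    // transformation goes undetected: a weak-oracle issue in the paper's terms.
    @Test
    public void weakOracle() {
        Counter c = new Counter();
        c.increment();
        // no assertion over c.getCount()
    }

    // The kind of improvement Reneri's suggestions point toward:
    // an oracle that observes the infected state.
    @Test
    public void strongOracle() {
        Counter c = new Counter();
        c.increment();
        assertEquals(1, c.getCount()); // fails on the transformed program
    }
}
```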
Automatic test improvement with DSpot: a study with ten mature open-source projects.
Authors: Benjamin Danglot, Oscar Luis Vera-Pérez, Benoit Baudry and Martin Monperrus
Venue: Empirical Software Engineering 24, 2603–2635 (2019). https://doi.org/10.1007/s10664-019-09692-y
Publication date: April 24, 2019
Abstract: In the literature, there is a rather clear segregation between tests written manually by developers and automatically generated ones. In this paper, we explore a third solution: to automatically improve existing test cases written by developers. We present the concept, design and implementation of a system called DSpot, that takes developer-written test cases as input (JUnit tests in Java) and synthesizes improved versions of them as output. Those test improvements are given back to developers as patches or pull requests, that can be directly integrated in the main branch of the test code base. We have evaluated DSpot in a deep, systematic manner over 40 real-world unit test classes from 10 notable open-source software projects. We have amplified all test methods from those 40 unit test classes. In 26/40 cases, DSpot is able to automatically improve the test under study, by triggering new behaviors and adding new valuable assertions. Next, for the ten projects under consideration, we have proposed test improvements automatically synthesized by DSpot to the lead developers. In total, 13/19 proposed test improvements were accepted by the developers and merged into the main code base. This shows that DSpot is capable of automatically improving unit tests in real-world, large-scale Java software.
Tags: Test improvement, JUnit test, Pull request, Empirical study
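As a rough illustration of the amplification the abstract describes, here is a hypothetical before/after pair of JUnit tests. This is not actual DSpot output; StackTest and its contents are invented. The idea is that DSpot mutates the inputs of an existing test to trigger new behaviors and then synthesizes assertions over the observable state.

```java
import static org.junit.Assert.assertEquals;
import org.junit.Test;

import java.util.ArrayDeque;

public class StackTest {

    // Original developer-written test.
    @Test
    public void testPush() {
        ArrayDeque<Integer> stack = new ArrayDeque<>();
        stack.push(42);
        assertEquals(1, stack.size());
    }

    // Amplified variant: a mutated input (a second push) plus new
    // assertions over state the original test left unobserved.
    @Test
    public void testPushAmplified() {
        ArrayDeque<Integer> stack = new ArrayDeque<>();
        stack.push(42);
        stack.push(7);                       // input amplification
        assertEquals(2, stack.size());       // assertion amplification
        assertEquals(7, (int) stack.peek()); // observes the top of the stack
    }
}
```

The amplified test is the kind of artifact DSpot returns to developers as a patch or pull request for review and possible merging.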