Testing is a crucial part of any software development process. It is also very expensive: common estimates put the effort of software testing at 50% of the average development budget. Recent studies suggest that 77% of the time developers spend on testing is spent reading tests. Tests are read when they are generated; when they are updated, fixed, or refactored; when they serve as API usage examples and specification; and during debugging. Reading and understanding tests can be challenging, and evidence suggests that, despite the popularity of unit testing frameworks and test-driven development, the majority of software developers do not actively practice testing. Automatically generated tests tend to be particularly unreadable, which severely inhibits the widespread use of automated test generation in practice. The effects of insufficient testing can be dramatic, causing large economic damage and potentially harming people who rely on software in safety-critical applications.
We propose to address this problem by improving the effectiveness and efficiency of testing through improved test readability. We will investigate which syntactic and semantic aspects make tests readable, so that readability can be made measurable by modelling it. This, in turn, will allow us to provide techniques that guide the manual or automatic improvement of test readability, made possible by a unique combination of machine learning, crowdsourcing, and search-based testing techniques. The GReaTest project will provide tools that help developers identify readability problems, automatically improve readability, and automatically generate readability-optimised test suites. The importance of readability and the usefulness of readability improvement will be evaluated in a range of empirical studies in conjunction with our industrial collaborators Microsoft, Google, and Barclays, investigating the relationship between test readability and fault-finding effectiveness, developer productivity, and software quality.
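As a purely illustrative sketch of how a learned readability model could guide such optimisation, the following Python fragment hill-climbs over variants of a test while keeping only changes the model rates as more readable. The helpers readability_score and mutate_test are hypothetical placeholders, not the actual GReaTest tooling: in the project, the readability model would be learned from crowd-sourced human judgements, and mutations would be semantics-preserving transformations such as renaming variables or simplifying literals.

```python
import random

def readability_score(test_code: str) -> float:
    """Placeholder readability model: in practice this would be learned
    from crowd-sourced human ratings of test readability."""
    lines = test_code.splitlines()
    avg_len = sum(len(line) for line in lines) / max(len(lines), 1)
    return 1.0 / (1.0 + avg_len / 40.0)  # crude proxy: shorter lines score higher

def mutate_test(test_code: str) -> str:
    """Placeholder transformation: drop one blank line at random.
    Real transformations would be semantics-preserving refactorings."""
    lines = test_code.splitlines()
    blanks = [i for i, line in enumerate(lines) if not line.strip()]
    if blanks:
        del lines[random.choice(blanks)]
    return "\n".join(lines)

def improve_readability(test_code: str, budget: int = 100) -> str:
    """Simple hill climbing: accept a variant only if the model scores it higher."""
    best, best_score = test_code, readability_score(test_code)
    for _ in range(budget):
        candidate = mutate_test(best)
        score = readability_score(candidate)
        if score > best_score:
            best, best_score = candidate, score
    return best
```

This is only a sketch under the stated assumptions; the project would combine such search with richer transformations and a readability model validated against human performance.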
Automated analysis and optimisation of test readability is novel: traditional analyses have focused only on easily measurable program aspects, such as code coverage. Improving the readability of software tests has a direct impact on industry, where testing is a major economic and technical factor: more readable tests will reduce the cost of testing and increase its effectiveness, thus improving software quality. Readability optimisation will be a key enabler for automated test generation in practice. Once the readability of software tests is understood, this opens the door to a new research direction on the analysis and improvement of other software artefacts based on human understanding and performance.