TECH.insight

Automating front end code coverage

Monday 16 March 2015

By measuring code coverage, it becomes obvious where problems lie

Measuring unit test code coverage is an important practice in ascertaining where weaknesses are within a software system. In this article, I will show how to apply this practice to code that runs in the front end – within the web browser itself – and what tools are available to test our code and measure code coverage.

Code coverage is important to understand, as it will identify gaps in your project not covered by unit and other tests, enabling developers to identify where additional tests might need to be added to ensure high code quality. It is therefore important to note that coverage represents the quality of your tests rather than the quality of the code.

Code coverage works by generating an instrumented version of the JavaScript file you’re testing, wrapping each of your code statements in a measurement function. A record is then kept of which statements were executed and which were not when your unit tests run against the instrumented code, giving an indication of what percentage of the original code is exercised by your tests – hence the term ‘code coverage’.
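As a simplified illustration – a real instrumenter such as Istanbul produces more elaborate output with machine-generated counter names – an instrumented function might look something like this:

// Original source
function add(a, b) {
    return a + b;
}

// Instrumented version (simplified): counters record what actually ran
function add(a, b) {
    __coverage__['scripts/add.js'].f['add'] += 1; // the function was called
    __coverage__['scripts/add.js'].s['1'] += 1;   // this statement was executed
    return a + b;
}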

There are a number of code coverage solutions available and the choice largely comes down to what your developers are happy with and what suits the project. Some of the most popular choices include SonarQube, Squale, Coverity and CAST, though more are available.

In this article, I am going to focus on how to use SonarQube to measure code coverage, and how you can set up your front-end project to connect to SonarQube to generate reports on your code. I will assume you already know how to write unit tests, and will identify the tools and setup you need to demonstrate code coverage to stakeholders.

SonarQube supports multiple programming languages via a plugin for each language. To measure code coverage for a specific language, simply install its plugin from the list of supported languages. It’s also important to point out that each plugin relies on a set of rules that may not all be appropriate for a particular project, or whose reported severity may differ from your developers’ preferences. It’s therefore worth reviewing these rules and configuring them for your project, otherwise the reported results will not accurately reflect it.

At the start of any project, I recommend you review and agree these language plugin rules within your team. Not everyone will agree with every rule, but settling on a common set keeps the code base consistent. Including all developers in this decision creates a clear understanding throughout the team of what is expected from their code, and leaves little room for misunderstanding. For the results to be effective, everyone has to buy in – and a good way to start is by everyone contributing to this decision upfront.

Before we set up our project with SonarQube, we need to identify a test runner to use for our JavaScript tests so that we can produce an LCOV-format file. This file forms a part of the data we send to SonarQube to report our code coverage results.
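For reference, an LCOV report is a plain-text file made up of one record per source file; a single record looks roughly like this, where each DA line maps a line number to the number of times it was hit:

SF:scripts/add.js
DA:1,1
DA:2,1
DA:5,0
LF:3
LH:2
end_of_record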

Here, I will be using the Jasmine test framework and Istanbul to generate the coverage report. But other JavaScript test frameworks can be used, depending on what developers are happy with and provided an LCOV-format file can be generated.
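As a minimal, hypothetical example – grunt-contrib-jasmine loads your source files into the spec runner, so a function defined in scripts/add.js is available to the spec as a global – a Jasmine spec in the tests folder might look like this:

// tests/add.spec.js – a hypothetical spec for scripts/add.js
describe('add', function () {
    it('sums two numbers', function () {
        expect(add(2, 3)).toBe(5);
    });
});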

Using Grunt with SonarQube

If you are already using the Grunt task runner in your front-end project, I’ll demonstrate how a Grunt task can send your code coverage data to SonarQube. Alternatively, you can use the Jenkins continuous integration server and its SonarQube plugin to trigger sending the code coverage report.

To use SonarQube with Grunt, we’ll need the Sonar-Runner plugin, together with the grunt-contrib-jasmine plugin to perform the tests. And we’ll need the grunt-template-jasmine-istanbul plugin to generate the LCOV-format code coverage report from the Jasmine tests.
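Assuming the standard npm package names (the Sonar-Runner plugin for Grunt is usually published as grunt-sonar-runner), installing and registering the plugins looks something like this:

// npm install --save-dev grunt-contrib-jasmine grunt-template-jasmine-istanbul grunt-sonar-runner

module.exports = function (grunt) {
    grunt.loadNpmTasks('grunt-contrib-jasmine');
    grunt.loadNpmTasks('grunt-sonar-runner');
    // grunt-template-jasmine-istanbul is not a task; it is passed to the jasmine
    // task as a template via require(), as shown in the configuration below.

    // grunt.initConfig({ ... }) and grunt.registerTask(...) follow
};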

Configuring Jasmine and Istanbul

The template plugin grunt-template-jasmine-istanbul is designed to work directly with the grunt-contrib-jasmine plugin. Connecting it is therefore reasonably simple, as shown below in an example Gruntfile.js configuration. Here, I assume that your JavaScript files are in a folder called scripts and the unit test scripts are stored in a tests folder. The code coverage report files will be generated into a folder called reports.

grunt.initConfig({
    jasmine: {
        coverage: {
            // Source files to instrument and measure
            src: 'scripts/*.js',
            options: {
                // Jasmine spec files to run against the instrumented sources
                specs: 'tests/*.js',
                template: require('grunt-template-jasmine-istanbul'),
                templateOptions: {
                    // Raw Istanbul coverage data
                    coverage: 'reports/coverage.json',
                    report: [
                        {
                            // LCOV-format report for SonarQube
                            type: 'lcov',
                            options: {
                                dir: 'reports/lcov'
                            }
                        },
                        {
                            // Plain-text summary printed to the console
                            type: 'text-summary'
                        }
                    ]
                }
            }
        }
    }
    ...
});

We can then use the Sonar-Runner Grunt task to read the generated LCOV-format report and send this to a SonarQube instance running on a server. In the example below, this is assumed to be running on localhost:9000 with its storage database running on localhost:3306.

Configure the Grunt Sonar-Runner plugin task

grunt.initConfig({
    ...

    sonarRunner: {
        analysis: {
            options: {
                sonar: {
                    // The SonarQube server to send the analysis to
                    host: {
                        url: 'http://localhost:9000'
                    },
                    // SonarQube's backing database
                    jdbc: {
                        url: 'jdbc:mysql://localhost:3306/sonar',
                        username: 'your-username-here',
                        password: 'your-password-here'
                    },
                    projectKey: 'your-unique-project-key-here',
                    projectName: 'Your Project Name Here',
                    projectVersion: '0.0.1',
                    // Folders containing the source files and the unit tests
                    sources: 'scripts',
                    tests: 'tests',
                    javascript: {
                        lcov: {
                            // The LCOV report generated by the jasmine task above
                            reportPath: 'reports/lcov/lcov.info'
                        }
                    },
                    sourceEncoding: 'UTF-8'
                }
            }
        }
    }
});

Configure the Grunt task

We can now create a single Grunt task to run our tests, generate our code coverage reports, and send the resulting data to SonarQube, as shown below.

grunt.registerTask('code-coverage', ['jasmine', 'sonarRunner']);

Running grunt code-coverage in the terminal or within your automated build system generates the code coverage data and transmits it to SonarQube, which then displays the quality of the code.

Conclusion

By measuring code coverage, it becomes obvious where problems lie and where developer time should be assigned to raise code quality.

Providing a visual representation of the project’s code coverage gives stakeholders a view of where investment is and isn’t needed. This can work either for or against you, but if everyone understands the results, problems can at least be identified by the whole team, not only by developers. By automating code coverage as part of your front-end task runner or your Continuous Integration (CI) server, results are constantly updated and issues can be identified and fixed quickly.

You can take this one step further by generating code complexity reports to help drill down into the quality of individual code functions and files. Should you decide to go down this path, Plato is a useful tool.
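As a sketch, and assuming you stay with Grunt and the grunt-plato plugin, a complexity report over the same scripts folder could be configured roughly as follows:

grunt.initConfig({
    ...

    plato: {
        complexity: {
            files: {
                // output directory : source files to analyse
                'reports/plato': ['scripts/*.js']
            }
        }
    }
});

grunt.loadNpmTasks('grunt-plato');
grunt.registerTask('code-complexity', ['plato']);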

About The Author

Martin is a Senior Web Developer at ideas & innovation agency AKQA in Berlin. He has a passion for web standards and coding practices, as well as a love of pushing the boundaries of the web.

@m4rtshaw