In software testing, the term test coverage refers to how much of an application’s functionality is exercised by test cases. In practice, the term also often refers to the effectiveness of that testing.
QA teams use test coverage as a benchmark because it tends to correlate closely with the quality of the end product. Better test coverage typically means fewer bugs get shipped to production.
It’s important to make a distinction here: improving test coverage usually means doing more testing, but doing more testing isn’t the goal in itself.
If you’re not testing the right things, more testing just means more work—especially if you’re talking about writing and maintaining automated tests. You could have a test suite of 500 super-detailed tests and have less effective test coverage than someone who is covering the most critical features of their app with only 50 tests.
So before you start adding tests at random, you need to develop a strategy to ensure that each test you add will actually improve your test coverage, instead of just making extra work for yourself.
At Rainforest QA, we’ve developed an approach to test coverage that we call the Snowplow Strategy.
The Snowplow Strategy for Test Coverage
Think of all of the possible user paths through an app like a city map with hundreds of streets. After a blizzard, snowplows work to clear the most-trafficked streets first because they affect the most people. Hours or even days later, they might make it onto the side streets, and some streets never get plowed in large cities.
Likewise, after each software release, you should prioritize testing the most important user paths to make sure they are working properly. Think of these user paths like the arterial routes through a city. If you make sure those are working smoothly after each release, you’ll maximize the impact of your testing efforts.
We use the Snowplow Strategy to test our own product, and we’ve taught it to hundreds of QA teams to help them get the maximum benefit from their test automation.
In this article, we’ll walk you through how to use the Snowplow Strategy to improve test coverage in five steps:
- Step 1: Develop metrics for defining good test coverage at your company.
- Step 2: Map out all your app’s features and user scenarios and rank by priority.
- Step 3: Find the gaps in your current test plan.
- Step 4: Use automation tools like Rainforest QA to ramp up test coverage.
- Step 5: Add tests as your app gets bigger and more complex to maintain good coverage.
We built Rainforest QA, a no-code test automation solution, because we saw that one of the biggest barriers to improving test coverage was how difficult it was to introduce automation.
The technical skills needed to use the existing tools and open-source frameworks meant that a lot of people who cared about quality and wanted to contribute were unable to do so. With Rainforest QA, anyone can begin improving their test coverage with automation in just a few minutes. Talk to us about setting up a Rainforest account.
Step 1: Develop Metrics for Defining Good Test Coverage at Your Company
Some software testing blogs recommend using a formula that presents test coverage as a ratio of the number of features you’re testing to the total number of features. Others go further and use the ratio of lines of code tested to total lines of code, which measures code coverage.
We’re not going to get into code coverage in this piece because that’s more relevant to unit testing (also called white box testing). For functional testing (also called black box testing), you’ll often see the formula written like this:

Test coverage (%) = (number of features tested ÷ total number of features) × 100

We’re not big fans of using this methodology to evaluate test coverage, because it makes it sound like reaching 100% test coverage is the goal, and it’s just not.
Here are a few reasons why:
- Not every test is created equal. Even though each test takes around the same amount of time to create and maintain, and costs approximately the same amount of money to run, each test does not have the same impact on quality.
- Maintaining 100% test coverage is impossible. A test suite that covers 100% of the possible user paths and features of an app would be completely unsustainable to maintain. And what’s worse, the vast majority of the bugs it would uncover would be too niche for developers to ever have time to fix. So why bother testing for them?
- The test coverage formula says nothing about the effectiveness of your QA program. A raw percentage of features tested tells you nothing about whether you’re testing the most important parts of your app or whether your testing is contributing to product quality.
Instead, we recommend that QA teams establish test coverage metrics that tie back to what their business cares about.
For many software teams, these are things like:
- Customer growth and retention
- Security of user data
- Speed of new feature releases
In an agile approach to software development, or in a continuous integration / continuous delivery (CI/CD) pipeline, teams often have to balance their goals for improving test coverage against their goals for releasing new features fast enough to keep up with market needs.
But if your app collects sensitive user data, as in healthcare or financial services, security could outrank speed in importance.
Each company has different priorities, based on the market dynamics in its industry, its customers’ tolerance for bugs, the speed of its competitors, and many other factors. This is why “good” test coverage looks different at each company, and why it’s impossible to say how many tests would provide good coverage for any given company.
The metrics you choose to evaluate test coverage should align with the tradeoffs you’re willing to make as a company between quality of the end product, speed of new releases, resources invested in QA, etc. Members of every department should help define these metrics.
Step 2: Map Out All Your App’s Features and User Scenarios and Rank by Priority
Remember the Snowplow Strategy? This is where it comes into play.
If you haven’t already, map out all of the different features of your app in a spreadsheet. This is often called a test coverage map.
Within each feature, identify the most common user scenarios, and then rank them by priority (Level 1 = Absolutely Essential, Level 2 = Very Important, Level 3 = Somewhat Important, Level 4 = Nice to Have, etc.). If you’re using a test automation tool like Rainforest QA, you can assign test priority directly in the app.
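To make this concrete, here’s a minimal sketch of what a coverage map might look like as structured data. The features, scenarios, and priority levels below are hypothetical; in practice, the map usually lives in a spreadsheet or in your test management tool.

```python
# A minimal, hypothetical coverage map: each row pairs a user scenario with
# its feature and a priority level (1 = Absolutely Essential ... 4 = Nice to Have).
coverage_map = [
    {"feature": "Checkout", "scenario": "Complete a purchase with a saved card", "priority": 1},
    {"feature": "Checkout", "scenario": "Apply a discount code",                 "priority": 2},
    {"feature": "Login",    "scenario": "Sign in with email and password",       "priority": 1},
    {"feature": "Login",    "scenario": "Reset a forgotten password",            "priority": 2},
    {"feature": "Profile",  "scenario": "Upload a new avatar",                   "priority": 4},
]

# List scenarios in priority order so the most critical paths surface first.
for row in sorted(coverage_map, key=lambda r: r["priority"]):
    print(f"Priority {row['priority']}: {row['feature']} - {row['scenario']}")
```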
Take the same approach that a city traffic engineer would take when designing plow routes. Prioritize the most heavily trafficked user paths (the freeways), paths that directly impact revenue (routes to grocery stores and malls), and paths that let people get help when they’re in trouble (routes to hospitals).
It’s helpful to evaluate the importance of each feature by asking what happens when it fails. If this feature doesn’t work, will the company’s revenue go down? Will customers be unable to make a purchase? Will they even be able to log in to the app?
Again, it’s important to involve more than just the QA team in these decisions. Defining coverage goals should be a collaborative effort that draws on subject matter experts from across the organization.
Step 3: Find the Gaps in Your Current Test Plan
Now that you’ve mapped out your app, it’s time to find the gaps in your current testing strategy.
The simplest way to do this is to look at your test plan and see which Level 1 priority tests you’re not running. These gaps leave you exposed to critical bugs that could cripple your company. The first step to improving test coverage is to fill those gaps.
When analyzing test coverage, there are a few factors to consider besides the number of features you’re testing:
- How many of our most important features/user paths are we testing? You can use the same basic formula structure as the one we criticized above, as long as you measure against the total number of critical features (see the sketch after this list). That number could be as low as one or two percent of your total features.
- Are we testing on all of the most popular browsers/operating systems that our users use? Most teams aim to test on the four major browsers: Safari, Microsoft Edge, Chrome, and Firefox.
- Do our tests mimic the real life conditions our users experience when using our app? Network traffic from other users, time of day, and geographic location of the user can all impact how an app behaves in real life, so it’s often helpful to recreate these conditions (and others) in testing.
- How often and how consistently are we testing each feature? Running the same tests regularly and consistently tends to result in better software quality than running some tests some of the time and only occasionally getting to the others because you run out of time. You could have a test suite that covers all your critical features, but if you’re only running 50% of it with each release, you have some potentially costly gaps.
- What user behaviors and scenarios are we testing? Testing happy paths (when users use the app as intended) is more important than testing edge cases, but once you’re covering all of your happy paths, it may be worth testing how the app handles user errors and other less common behaviors.
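To illustrate the first question in the list, here’s a rough sketch of measuring coverage against only your critical scenarios, reusing the hypothetical coverage map from Step 2. The set of covered scenarios is made up for illustration.

```python
# Hypothetical coverage map (see Step 2), trimmed to a few rows.
coverage_map = [
    {"feature": "Checkout", "scenario": "Complete a purchase with a saved card", "priority": 1},
    {"feature": "Login",    "scenario": "Sign in with email and password",       "priority": 1},
    {"feature": "Login",    "scenario": "Reset a forgotten password",            "priority": 2},
]

# Hypothetical: the scenarios your current suite actually exercises.
covered_scenarios = {"Complete a purchase with a saved card"}

# Measure coverage against Level 1 scenarios only, not against every feature.
critical = [row for row in coverage_map if row["priority"] == 1]
covered = [row for row in critical if row["scenario"] in covered_scenarios]
pct = 100 * len(covered) / len(critical) if critical else 0
print(f"Critical-path coverage: {len(covered)}/{len(critical)} ({pct:.0f}%)")

# Every uncovered Level 1 scenario is a gap to fill first.
for row in critical:
    if row["scenario"] not in covered_scenarios:
        print(f"Gap: {row['feature']} - {row['scenario']}")
```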
Answering these questions will help you identify the biggest holes in your current test coverage as compared to your coverage map.
But how do you know that your coverage map appropriately identifies the most important user paths? How do you know you’re not missing something crucial?
You can look at the metrics you’re tracking to evaluate your overall test coverage. If those metrics are improving, you’re probably testing the right things. But you can also compare the priorities your team came up with against actual user behavior using Google Analytics.
Using Google Analytics to Find Test Coverage Gaps
If you run your test suite on a dedicated staging server, you can use Google Analytics to compare which parts of the app get the most traffic when you run your tests, versus which parts of the app get the most usage on the production server.
Ideally, the traffic patterns on both versions of your app should be very similar. Whether you’re doing manual or automated tests, the test traffic should mimic user traffic. If there’s a section of the app that gets a ton of user traffic but very little traffic from the test suite, that’s a potentially critical gap in your testing coverage.
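As a rough sketch of that comparison: if you export pageview counts for each environment from Google Analytics as CSV files, a short script can flag the gaps. The filenames, column names, and 5% threshold below are all hypothetical.

```python
import csv

def load_traffic_shares(path):
    """Load a CSV of pageview counts and normalize to each page's share of traffic."""
    with open(path, newline="") as f:
        counts = {row["page"]: int(row["pageviews"]) for row in csv.DictReader(f)}
    total = sum(counts.values()) or 1
    return {page: views / total for page, views in counts.items()}

production = load_traffic_shares("production_pageviews.csv")  # real-user traffic
staging = load_traffic_shares("staging_pageviews.csv")        # test-suite traffic

# Flag pages that get heavy user traffic but proportionally little
# traffic from the test suite.
for page, user_share in sorted(production.items(), key=lambda kv: -kv[1]):
    test_share = staging.get(page, 0.0)
    if user_share > 0.05 and test_share < user_share / 2:
        print(f"Potential gap: {page} ({user_share:.0%} of user traffic, {test_share:.0%} of test traffic)")
```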
Step 4: Use No-Code Automation Tools Like Rainforest QA to Ramp Up Test Coverage
Once you’ve defined what ideal test coverage looks like for your app, it’s time to start adding tests to get closer to that goal.
Automation is the most efficient way to scale up test coverage because it allows you to use the time you would have spent running tests to plan new test scenarios and build new tests.
If you’re doing manual testing, then there’s a linear relationship between test coverage and time spent testing. The only way to increase test coverage is to add headcount—both testers and someone to manage the testers (and you probably also need some kind of test management software like TestRail to organize the results).
Automation is way more accessible to small QA teams today than it used to be because of no-code automated testing tools. With no-code automation, anyone on your team can improve test coverage without learning how to use one of the open-source automation frameworks like Selenium.
A common mistake QA teams make when they start automating tests is thinking that they need to build a complete suite of automated tests before they can start running any of them. But automation doesn’t have to be all or nothing. If you automate just five or ten of the tests you’re already running manually, you’ll free yourself up to increase manual test coverage.
Automating regression testing is a great place to start because it’s so repetitive. You’re running the same test cases over and over every time you release something new, so automation can offload those repetitive tasks. And regression testing tends to be the main bottleneck to releasing, so if you automate it, you’ll ship faster.
With our tool, Rainforest QA, even a single non-technical QA specialist can build a whole suite of automated regression tests and help your team improve test coverage.
If you want to learn how to start doing automated testing without having to learn Selenium, check out our in-depth article about how to automate testing or talk to us about setting up a Rainforest plan that fits your needs.
Step 5: Add Tests as Your App Gets Bigger and More Complex to Maintain Good Coverage
Maintaining good test coverage takes consistent effort, even after you have automated your entire test suite.
Whenever you add a new feature to your app, you need to add test cases to your regression suite that cover the most critical user paths in that feature.
To return to our snowplow analogy: think of the city engineer revising the plow map when a new neighborhood opens. And if that new feature is especially popular, it could change the priority ranking of some of the related tests, just as a new subdivision can change the traffic patterns in a city.
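Continuing the hypothetical coverage map from Step 2, the bookkeeping for a new feature might look something like this (the Referrals feature and its scenario are invented for illustration):

```python
# Hypothetical coverage map from Step 2, trimmed to one row.
coverage_map = [
    {"feature": "Checkout", "scenario": "Complete a purchase with a saved card", "priority": 1},
]

# A new feature ships: add its key scenarios to the map (and plan the
# matching regression tests).
coverage_map.append(
    {"feature": "Referrals", "scenario": "Send an invite link to a friend", "priority": 2}
)

# If the feature proves unexpectedly popular, promote its scenarios.
# Think of a new subdivision changing which streets get plowed first.
for row in coverage_map:
    if row["feature"] == "Referrals":
        row["priority"] = 1
```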
Often, the pressure to ship new releases quickly will force QA teams to skip building automated tests for new features and just test them manually. But when the next feature comes along, if the regression tests for the last feature haven’t been added to the suite, the whole testing process will take much longer.
Be diligent about keeping a recorded backlog of new tests that you need to write as new bugs are found and new features are developed. You will inevitably fall behind, but having a backlog that anyone can view allows people to jump in and add high priority tests whenever they have a spare minute.
An easy way to keep a shared backlog in Rainforest is to create a new folder whenever you add a new feature to your app. Even if you don’t have time to create a single test for that feature yet, the empty folder serves as a reminder that it still needs tests.
You could also consider using Rainforest’s test automation services to increase test coverage if you just don’t have time to write enough tests and it’s becoming a bottleneck.
The last thing to keep in mind when it comes to maintaining good test coverage is that there’s a point at which you reach diminishing returns. We talk about this more in our Introduction to QA Strategy guide.
Past that point, instead of adding more tests, it might be more valuable to run the tests you already have more often.
Use the Snowplow Strategy to Improve Test Coverage with Rainforest QA
Using the Snowplow Strategy to improve test coverage will make sure that the time you invest in building out your test suite translates into improved coverage and improved quality, not just busywork for your QA team.
It will save you time building, running, and maintaining tests, and help you make sure that critical bugs don’t make it into production.
Looking for an easy-to-use, no-code automation tool to help improve test coverage? Talk to us about setting up a Rainforest plan that fits your needs.