The Snowplow Strategy: Improve Automation Test Coverage in Five Steps

In software testing, the term test coverage refers to how much of an application’s functionality is covered by test cases. In practice, the term also often refers to the effectiveness of that testing. 

QA teams use test coverage as a benchmark because it tends to correlate closely with the quality of the end product. Better test coverage typically means fewer bugs get shipped to production. 

It’s worth making a distinction here: improving test coverage usually means doing more testing, but doing more testing isn’t the goal in itself.

If you’re not testing the right things, more testing just means more work—especially if you’re talking about writing and maintaining automated tests. You could have a test suite of 500 super-detailed tests and have less effective test coverage than someone who is covering the most critical features of their app with only 50 tests.

So before you start adding tests at random, you need to develop a strategy to ensure that each test you add will actually improve your test coverage, instead of just making extra work for yourself. 

At Rainforest QA, we’ve developed an approach to test coverage that we call the Snowplow Strategy.

The Snowplow Strategy for Test Coverage

Think of all of the possible user paths through an app like a city map with hundreds of streets. After a blizzard, snowplows clear the most-trafficked streets first because those streets affect the most people. Hours or even days later, they might make it onto the side streets, and in large cities, some streets never get plowed at all.

Likewise, after each software release, you should prioritize testing the most important user paths to make sure they are working properly. Think of these user paths like the arterial routes through a city. If you make sure those are working smoothly after each release, you’ll maximize the impact of your testing efforts. 

We use the Snowplow Strategy to test our own product, and we’ve taught it to hundreds of QA teams to help them get the maximum benefit from their test automation. 

In this article, we’ll walk you through how to use the Snowplow Strategy to improve test coverage in five steps:

  • Step 1: Develop metrics for defining good test coverage at your company.
  • Step 2: Map out all your app’s features and user scenarios and rank by priority.
  • Step 3: Find the gaps in your current test plan.
  • Step 4: Use automation tools like Rainforest QA to ramp up test coverage.
  • Step 5: Add tests as your app gets bigger and more complex to maintain good coverage.

We built Rainforest QA, a no-code test automation solution, because we saw that one of the biggest barriers to improving test coverage was how difficult it was to introduce automation. 

The technical skills needed to use the existing tools and open-source frameworks meant that a lot of people who cared about quality and wanted to contribute were unable to do so. With Rainforest QA, anyone can begin improving their test coverage with automation in just a few minutes. Try it yourself with our free 14-day trial. 

Step 1: Develop Metrics for Defining Good Test Coverage at Your Company

Some software testing blogs recommend using a formula that presents test coverage as a ratio of the number of features you’re testing to the total number of features. Others go further and compare the number of lines of code you’re testing to the total lines of code, which measures code coverage.

We’re not going to get into code coverage in this piece because it’s more relevant to unit testing (a form of white-box testing). For functional testing (a form of black-box testing), you’ll often see the formula written like this:

The typical formula for test coverage: the number of features covered with test cases, divided by the total number of features, equals the percentage test coverage.
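
For a quick worked example (with made-up numbers): if 45 of an app’s 300 features have at least one test case, coverage by this formula is 15%. Here’s the same arithmetic as a minimal Python sketch:

```python
# Hypothetical numbers, for illustration only.
features_total = 300
features_covered = 45  # features with at least one test case

coverage_pct = features_covered / features_total * 100
print(f"Test coverage: {coverage_pct:.0f}%")  # -> Test coverage: 15%
```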

We’re not big fans of using this methodology to evaluate test coverage, because it makes it sound like reaching 100% test coverage is the goal, and it’s just not.

Here are a few reasons why:

  1. Not every test is created equal. Each test takes roughly the same amount of time to create and maintain, and costs roughly the same to run, but each test does not have the same impact on quality.
  2. Maintaining 100% test coverage is impossible. A test suite that covers 100% of the possible user paths and features of an app would be completely unsustainable to maintain. Worse, the vast majority of the bugs it would uncover would be too niche for developers to ever have time to fix. So why bother testing for them? 
  3. The test coverage formula says nothing about the effectiveness of your QA program. A raw percentage of features tested tells you nothing about whether you’re testing the most important parts of your app or whether your testing is contributing to product quality. 

Instead, we recommend that QA teams establish test coverage metrics that tie back to what their business cares about. 

For many software teams, these are things like:

  • Customer growth and retention
  • Security of user data
  • Speed of new feature releases

In an agile approach to software development, or in a continuous integration / continuous delivery (CI/CD) pipeline, teams often have to balance their goals for improving test coverage with their goals for releasing new features fast enough to keep up with market needs. 

But if your app collects sensitive user data, as in healthcare or financial services, security could outrank speed in importance. 

Each company has different priorities, based on the market dynamics in their industry, their customers’ tolerance for bugs, the speed of their competitors, and many other factors. This is why “good” test coverage looks different at each company and why it’s impossible to say what number of tests will provide good test coverage for each company.

The metrics you choose to evaluate test coverage should align with the tradeoffs you’re willing to make as a company between quality of the end product, speed of new releases, resources invested in QA, etc. Members of every department should help define these metrics. 

Step 2: Map Out All Your App’s Features and User Scenarios and Rank by Priority

Remember the Snowplow Strategy? This is where it comes into play. 

If you haven’t already, map out all of the different features of your app in a spreadsheet. This is often called a test coverage map.

Within each feature, identify the most common user scenarios, and then rank them by priority (Level 1 = Absolutely Essential, Level 2 = Very Important, Level 3 = Somewhat Important, Level 4 = Nice to Have, etc.). If you’re using a test automation tool like Rainforest QA, you can assign test priority directly in the app.
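
To make the coverage map concrete, here’s a minimal Python sketch of one. The feature and scenario names are hypothetical, and a real map would live in a spreadsheet or a tool like Rainforest, but the structure is the same: one row per user scenario, ranked by priority.

```python
# A hypothetical coverage map: one row per user scenario.
# Priority levels: 1 = Absolutely Essential ... 4 = Nice to Have.
coverage_map = [
    {"feature": "Checkout", "scenario": "Complete a purchase with a saved card", "priority": 1},
    {"feature": "Login",    "scenario": "Sign in with email and password",       "priority": 1},
    {"feature": "Search",   "scenario": "Filter results by category",            "priority": 2},
    {"feature": "Profile",  "scenario": "Upload a custom avatar",                "priority": 4},
]

# Rank scenarios so the most essential user paths surface first.
for row in sorted(coverage_map, key=lambda r: r["priority"]):
    print(f"Level {row['priority']}: {row['feature']} - {row['scenario']}")
```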

Take the same approach a city traffic engineer would take when designing plow routes. Prioritize the most heavily trafficked user paths (the freeways), the paths that directly impact revenue (routes to grocery stores and malls), and the paths that allow people to get help when they’re in trouble (routes to hospitals).

It’s helpful to evaluate the importance of each feature by asking what happens when it fails. If this feature doesn’t work, will the company’s revenue go down? Will customers be unable to make a purchase? Will they even be able to log in to the app? 

Again, it’s important to involve more than just the QA team in these decisions. Defining coverage goals should be an activity with engagement from subject matter experts across the organization. 

Step 3: Find the Gaps in Your Current Test Plan

Now that you’ve mapped out your app, it’s time to find the gaps in your current testing strategy. 

The simplest way to do this is to look at your test plan and see which Level 1 priority tests you’re not doing. These gaps leave you exposed to critical bugs that could cripple your company. The first step to improving test coverage is to fill those gaps. 
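
Here’s a minimal Python sketch of that gap check, reusing the hypothetical coverage map idea from Step 2 alongside a (made-up) set of scenarios your current test plan already covers:

```python
# Hypothetical coverage map; priority 1 = Absolutely Essential.
coverage_map = [
    {"scenario": "Complete a purchase with a saved card", "priority": 1},
    {"scenario": "Sign in with email and password",       "priority": 1},
    {"scenario": "Filter results by category",            "priority": 2},
]

# Scenarios your current test plan already covers (also hypothetical).
tested = {"Sign in with email and password"}

# Any Level 1 scenario without a test is a critical coverage gap.
gaps = [row["scenario"] for row in coverage_map
        if row["priority"] == 1 and row["scenario"] not in tested]
print("Critical gaps:", gaps)
# -> Critical gaps: ['Complete a purchase with a saved card']
```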

When analyzing test coverage, there are a few factors to consider besides the number of features you’re testing:

  • How many of our most important features/user paths are we testing? You can use the same basic formula structure as the one we criticized above, as long as you measure against the total number of critical features. That total could be as low as one or two percent of all your features.

  • Are we testing on all of the most popular browsers/operating systems that our users use? Most teams aim to test on the four major browsers: Safari, Microsoft Edge, Chrome, and Firefox.

  • Do our tests mimic the real life conditions our users experience when using our app? Network traffic from other users, time of day, and geographic location of the user can all impact how an app behaves in real life, so it’s often helpful to recreate these conditions (and others) in testing.

  • How often and how consistently are we testing each feature? Running the same tests consistently and regularly tends to result in better software quality than running some tests sporadically and only occasionally getting to others because you run out of time. You could have a test suite that covers all your critical features, but if you’re only running 50% of your test suite with each release, you have some potentially costly gaps.

  • What user behaviors and scenarios are we testing? Testing happy paths (when users use everything as intended) is more important than testing edge cases, but if you’re covering all of your happy paths, it might be worth it to start testing how the app handles user errors and other less common user behaviors. 

Answering these questions will help you identify the biggest holes in your current test coverage as compared to your coverage map.

But how do you know that your coverage map appropriately identifies the most important user paths? How do you know you’re not missing something crucial? 

You can look at the metrics you’re tracking to evaluate your overall test coverage. If those metrics are improving, you’re probably testing the right things. But you can also compare the priorities your team came up with against actual user behavior using Google Analytics.

Using Google Analytics to Find Test Coverage Gaps

If you run your test suite on a dedicated staging server, you can use Google Analytics to compare which parts of the app get the most traffic when you run your tests, versus which parts of the app get the most usage on the production server. 

Ideally, the traffic patterns on both versions of your app should be very similar. Whether you’re doing manual or automated tests, the test traffic should mimic user traffic. If there’s a section of the app that gets a ton of user traffic but very little traffic from the test suite, that’s a potentially critical gap in your testing coverage.
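
To illustrate the idea (this is a sketch of the comparison, not a Google Analytics API tutorial, and the pageview numbers are invented), here’s how you might flag heavily used pages that your test runs barely touch:

```python
# Hypothetical pageview counts exported from your analytics reports.
production = {"/checkout": 52_000, "/search": 31_000, "/settings": 2_000}
staging    = {"/checkout": 12,     "/search": 540,    "/settings": 310}

def traffic_shares(counts):
    """Convert raw pageview counts into each page's share of total traffic."""
    total = sum(counts.values())
    return {page: n / total for page, n in counts.items()}

prod, test = traffic_shares(production), traffic_shares(staging)

# Flag pages with lots of real user traffic but very little test traffic.
for page in prod:
    if prod[page] > 0.20 and test.get(page, 0) < 0.05:
        print(f"Potential coverage gap: {page} "
              f"({prod[page]:.0%} of user traffic, {test.get(page, 0):.0%} of test traffic)")
# -> Potential coverage gap: /checkout (61% of user traffic, 1% of test traffic)
```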

Step 4: Use No-Code Automation Tools Like Rainforest QA to Ramp Up Test Coverage

Once you’ve defined your best-case scenario for test coverage, it’s time to start adding tests to get closer to that goal.

Automation is the most efficient way to scale up test coverage because it allows you to use the time you would have spent running tests to plan new test scenarios and build new tests. 

If you’re doing manual testing, then there’s a linear relationship between test coverage and time spent testing. The only way to increase test coverage is to add headcount—both testers and someone to manage the testers (and you probably also need some kind of test management software like TestRail to organize the results).

Automation is way more accessible to small QA teams today than it used to be because of no-code automation tools. With no-code automation, anyone on your team can improve test coverage without learning how to use one of the open-source automation frameworks like Selenium.

A common mistake QA teams make when they start automating tests is thinking that they need to build a complete suite of automated tests before they can start running any of them. But automation doesn’t have to be all or nothing. If you automate just five or 10 of the tests you’re already doing manually, you’ll free yourself up to increase manual test coverage. 

Automating regression testing is a great place to start because it’s so repetitive. You’re doing the same test cases over and over every time you release something new, so automation can offload these repetitive tasks. And regression testing tends to be the main bottleneck to releasing. If you automate it, you’ll ship faster. 

With our tool, Rainforest QA, even a single non-technical QA specialist can build a whole suite of automated regression tests and help your team improve test coverage.

Here are some of the advantages of using Rainforest QA to improve test coverage as compared to doing functional tests manually or with Selenium.

Run Tests More Often

Rainforest QA allows you to run any number of tests in parallel. 

This means you can run tests more often because the time to run your entire test suite is only as long as the longest single test in the suite. But keep in mind that every time you run a test suite, you’ll have to spend time evaluating the results, categorizing failures, sending bugs to developers, etc.
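
The wall-clock math behind parallel runs is simple. A quick sketch with made-up test durations:

```python
# Hypothetical durations (in minutes) for five automated tests.
durations = [4, 7, 3, 12, 5]

print("Run sequentially:", sum(durations), "minutes")  # -> 31 minutes
print("Run in parallel: ", max(durations), "minutes")  # -> 12 minutes
```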

Limit Human Error

Automation removes human error from test execution, giving you more confidence in your testing results. And with Rainforest, you’re not writing any test code when you create the tests like you would in Selenium, so you’re likely to make fewer errors when setting up automation.

Spend Less Time Maintaining Tests

One unavoidable reality of using automation to improve test coverage is that automated tests have to be maintained as the features they are testing change with new app updates. 

The time savings from automation are still more than worth it in most cases, but with Selenium, maintenance can become a major headache because minor code changes that don’t affect the UI can break the tests. 

Unlike Selenium, which tests the code behind an application, Rainforest tests the visual layer of an app. This makes Rainforest tests less susceptible to breaking due to minor code changes that don’t affect the UI.

Let’s say you want to verify that your “Try for Free” button still works after a software release. In Rainforest, you’d create a test that includes a step to click on the “Try for Free” button. You’d tell the test where to click by clicking and dragging a selection box around the button. 

When you run the test, the automation will check whether pixels that match your image selection exist anywhere on the page. If they do, the test will find them and be able to proceed, even if the underlying code changed. 

In contrast, if you were using Selenium to test this button, you could run into a variety of situations where a minor code change that doesn’t affect the UI would cause the test to fail. Selenium uses code-based locators (element IDs, CSS selectors, XPath, and so on) to find elements on the page, rather than pixel matching, so if a locator changes, the test can fail. 
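
To illustrate (the URL and element id here are hypothetical), this is roughly what that kind of locator-based step looks like in Selenium’s Python bindings:

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://example.com/pricing")  # hypothetical page under test

# This step is tied to the DOM, not the visual layer. If a developer
# renames the id (say, to "cta-free-trial") without changing how the
# button looks, find_element raises NoSuchElementException and the test
# fails even though the UI is fine.
driver.find_element(By.ID, "try-for-free").click()

driver.quit()
```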

A Rainforest test would fail only if the actual visual appearance of the button changed. For example, if the button was accidentally hidden due to a bug in a recent code push, the test would (and should) fail.

Rainforest QA tests can still have “false positives” (test failures for which there’s no actual bug), because some UI changes, such as a button changing shape or color, can break tests. But most Rainforest users find that test maintenance is less of a headache than with tools like Selenium because:

  1. You have the option to use text matching to avoid test failures when you know the UI of a visual element is likely to change, and
  2. Anyone can quickly update tests in the visual editor without having to involve an engineer or write any code.

Anyone Can View, Share, and Understand Test Results

Rainforest automatically creates a shareable record of all of your test results. You can monitor trends in the types of bugs being found, the number of bugs, the ratio of false positive to false negative test results, and so on. 

With manual testing, you might not have any records at all other than a check mark in a spreadsheet next to a test case indicating pass/fail. 

Every test run also includes a video recording, which makes it easy to find bugs when tests fail. If a test fails because of a legitimate bug, you can create Jira tickets for the development team with one click. If it fails because of an error in the test itself, you can identify the problem and fix it within a few seconds. 

Test on the Most Common Browsers

Rainforest supports automation testing on the Big Four browsers (Safari, Microsoft Edge, Chrome, and Firefox) and all of the major operating systems. 

If you’re doing manual testing or testing with Selenium, you have to set up or purchase a test grid or a physical device lab, with devices loaded up with all the different operating systems and browsers you need to cover.

Start Improving Test Coverage without Learning to Code

Rainforest makes it easy for anyone to improve test coverage. If you want to learn how to start doing automated testing without having to learn Selenium, check out our in-depth article about how to automate testing or start a free 14-day trial.

Step 5: Add Tests as Your App Gets Bigger and More Complex to Maintain Good Coverage

Maintaining good test coverage takes consistent effort, even after you have automated your entire test suite. 

Whenever you add a new feature to your app, you need to add test cases to your regression suite that cover the most critical user paths in that feature. 

To return to our snowplow analogy: think of the city engineer revising the plow map when a new neighborhood opens. And if the new feature is especially popular, it could change the priority ranking of some of the tests related to it, just like a new subdivision can change the traffic patterns in a city.

Often, the pressure to ship new releases quickly will force QA teams to skip building automated tests for new features and just test them manually. But when the next feature comes along, if the regression tests for the last feature haven’t been added to the suite, the whole testing process will take much longer. 

Be diligent about keeping a recorded backlog of new tests that you need to write as new bugs are found and new features are developed. You will inevitably fall behind, but having a backlog that anyone can view allows people to jump in and add high priority tests whenever they have a spare minute.

An easy way to keep a shared backlog in Rainforest is to create a new folder whenever you add a new feature to your app. Even if you don’t have time to create a single test for that feature, the empty folder serves as a reminder that you need to create more tests.

You could also consider using Rainforest’s Test Writing Service to increase test coverage if you just don’t have time to write enough tests and it’s becoming a bottleneck.

The last thing to keep in mind when it comes to maintaining good test coverage is that there is a point where you reach diminishing returns, as illustrated in the diagram below. We talk about this more in our Introduction to QA Strategy guide.

Line of Diminishing Returns: From Most Productive, to Diminishing Returns, to Negative Returns

Past the point of diminishing returns, instead of adding more tests, it might be more valuable to run the tests you already have more often. 

Use the Snowplow Strategy to Improve Test Coverage with Rainforest QA

Using the Snowplow Strategy to improve test coverage ensures that the time you invest in building out your test suite translates into improved coverage and improved quality, not just busy work for your QA team. 

It will save you time building tests, running them, and maintaining them, and help you make sure that critical bugs don’t make it through to production. 

Looking for an easy-to-use, no-code automation tool to help improve test coverage? Try out Rainforest QA for free.

Related articles

From 3 Weeks to 3 Hours: How Signagelive Sped up Regression Testing by Switching to Automation

Learn how Signagelive built 500 tests in record time and dramatically sped up development cycles with automated tests.

10 Codeless Test Automation Tools to Speed Up QA Testing

The right codeless test automation tools make it easy for anyone to speed up their QA process without trading quality.

A Detailed Comparison of Cypress vs. Selenium vs. Katalon Studio vs. Rainforest

Learn how Cypress, Katalon Studio, and Rainforest each try to solve the shortcomings of Selenium.

Selenium Alternatives: 7 QA Tools to Consider, Including a No-Code Option

These seven Selenium alternatives are excellent options, particularly for those looking for a no-code solution.