For teams following agile software development practices, regression testing is a must. Agile teams constantly push changes to live software, and any change can introduce a regression: a code change that breaks existing functionality in part of an app.
Regression testing can keep teams from shipping critical bugs to production by confirming the most important parts of an app are still working every time new code is pushed.
However, regression testing has developed a bit of a reputation as being difficult to execute in an agile environment without slowing down sprints.
There are a few reasons for this perceived tension between agile methodology and regression testing:
Luckily, all of these issues can be resolved with modern automated regression testing tools and strategies.
In this post, we’ll cover the basics of regression testing in an agile development environment, and then we’ll share five strategies that help teams overcome common challenges associated with doing regression testing in agile.
We’ll also walk through some of the ways our tool, Rainforest QA, makes it easier than other automated testing tools for agile teams to fit regression testing into sprints. You can try Rainforest QA for free here. Our Professional plan provides up to five hours of no-code automated regression tests for free, every month. It’s only $5/hour after that.
Before we dig into the details of how to execute regression testing in agile, let’s clarify one thing:
Regression testing is meant to complement other forms of automated software testing.
Regression testing is a form of black box functional testing that evaluates whether a specific input in your app leads to a predictable, consistent output. Because regression testing doesn’t directly examine the accuracy of the code itself, it’s still important to run unit tests before doing any regression testing.
When developers are writing code for new features, it’s hard to keep in mind all the intricate ways the new code could interact with existing code. Attempting to avoid any possible UI errors slows developers down, making it hard for agile teams to meet their deadlines in each sprint. In many cases, it’s impossible not to introduce at least some errors.
But with a reliable regression test suite, a developer can test the compatibility of a new feature with all existing functionalities before they check the branch back into master. If the tests find new bugs, they can fix them before merging the branch. In this way, regression testing becomes a safety net that helps developers focus on the new functionality they’re building.
Most developers today wouldn’t dream of checking in a new code branch without doing unit testing first. Similarly, when new developers start at Rainforest, they can’t believe they ever merged a branch or pushed code to production without regression testing the UI first.
One problem agile development teams run into with regression testing is waiting for a siloed QA team to write, run, and interpret tests. If the QA team gets behind on test creation, regression testing can easily become a bottleneck.
For this reason, we believe a better approach is for developers and QA specialists to work closely together in a hybrid team to share the ownership of regression testing.
When the QA team is tightly integrated into the product development process, they can start planning their test coverage and writing tests much earlier in the development lifecycle than if they are siloed.
In that scenario, the QA team members can own the test planning and writing, and the developers can own test interpretation and maintenance.
As soon as new features are live on some pre-prod branch, the QA team can start writing tests to cover the new user paths. When the developers check in a new branch, they can run the existing regression suite with one click. If any of the tests fail, they can quickly determine if the failures represent real bugs or if the tests just need to be refactored.
Since the new code they’re checking in is fresh in their minds, it’s usually easy for them to identify why a test failed and refactor it quickly.
However, traditional test automation tools have made this setup unworkable for many companies because of the technical barrier to entry.
Popular test automation tools like Selenium and Cypress require programming skills to use (and lots of practice to program well). Many software startups can’t afford the extra developer headcount or time necessary to build the test suite. And for those who can afford dedicated QA engineers, testing often still slows down the release process because it’s so technical and difficult to manage.
A no-code automation testing tool (especially an all-in-one tool like Rainforest) changes the game by making it easy for anyone to write, run, and maintain tests.
In fact, Rainforest QA is the only testing tool that lets you write, run, and maintain automated tests without writing a single line of code.
Developers and non-technical team members alike find it much faster to work with a no-code tool than to write traditional test automation scripts.
The brief video below shows what it looks like to create tests in Rainforest’s visual editor:
You choose from a dropdown menu of preset actions (such as “click”, “select”, or “type”). Then, you click and drag the cursor to take a screenshot of the element you want to apply the action to.
You can learn all the basics you’ll need in just a few minutes.
Finding the right cadence for running your full regression testing suite can take some experimentation. Not every code change is significant enough to justify running your full suite.
But in general, the more frequently you run the suite, the less time it will take to evaluate the results and fix any bugs—since you’ll be more familiar with the features in the new build and because the volume of code you’re testing is smaller.
Ideally, you should run your full regression suite every time you merge a branch back to master. If that’s not possible, you should still follow a few rules of thumb. You should always run your regression suite in the following situations:
Another reason agile teams sometimes struggle with regression testing is because they try to achieve 100% coverage of their app. While there’s no magic number for how many regression tests you need, most of our customers feel confident they’re catching all the most critical bugs if their regression suite covers between 60 and 80 percent of their user paths.
There’s a point of diminishing returns with any kind of automated testing. If you’re not testing the right things, or you’re testing too much, you’re doing work that isn’t bringing value to the team.
When deciding which user paths to test, we like to use the Snowplow Analogy.
Think of a snowplow route through a major city. Your user paths are like city streets. Some of them get more traffic than others. Just like a snowplow works to clear the most trafficked highways before touching the side streets, you should cover your most trafficked user paths with regression tests before you worry about edge cases.
Some teams even use traffic data from Google Analytics to help prioritize which paths to test (we explain how to do that in this article).
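The snowplow approach can be sketched in a few lines of code. The path names and pageview counts below are entirely hypothetical; the point is simply to rank user paths by traffic and cover the busiest ones with regression tests first.

```python
# Rank user paths by traffic so the busiest "streets" get tests first.
# All path names and pageview counts are made up for illustration.
pageviews = {
    "/login": 52_000,
    "/checkout": 31_000,
    "/settings/export": 1_200,
    "/dashboard": 44_000,
}

def paths_to_cover(pageviews, top_n):
    """Return the top_n most-trafficked paths, busiest first."""
    ranked = sorted(pageviews, key=pageviews.get, reverse=True)
    return ranked[:top_n]

print(paths_to_cover(pageviews, 3))  # → ['/login', '/dashboard', '/checkout']
```

In practice you’d feed this from an analytics export rather than a hard-coded dictionary, but the prioritization logic is the same.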
Once you build out your regression suite, you may realize that the full suite is too large to run in its entirety in every situation. In that case, you’ll need to do some form of selective re-testing.
It may be possible to identify a subset of your test suite that covers all the user paths that the latest changes are likely to affect. If that’s not possible (and it’s often just too hard to predict which user paths will be impacted), another option is to divide your test suite into priority levels based on the importance of the features they test.
If you’re pressed for time, you can run only the top priority tests. These would include anything related to logging in, making payments, and one or two of the most popular features.
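Selective re-testing by priority boils down to a simple filter over the suite. The test names and P1/P2/P3 labels in this sketch are hypothetical:

```python
# Filter a test suite down to its highest-priority tests.
# Test names and priority labels are illustrative only.
suite = [
    {"name": "Login - valid credentials", "priority": "P1"},
    {"name": "Checkout - credit card payment", "priority": "P1"},
    {"name": "Profile - change avatar", "priority": "P3"},
    {"name": "Search - filter by date", "priority": "P2"},
]

def select_by_priority(suite, max_priority="P1"):
    """Keep only tests at or above the given priority (P1 is highest)."""
    return [t for t in suite if t["priority"] <= max_priority]

# A quick smoke run covers only the P1 tests:
smoke_run = select_by_priority(suite, "P1")
```

Loosening `max_priority` to "P2" or "P3" widens the run when you have more time.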
It’s easy to add priority levels to tests in Rainforest. For each test, you can attach a priority attribute of P1, P2, or P3. You’ll find this option in the settings within each test (see screenshot below). These markers are strictly there to help you filter and categorize tests and don’t affect the order in which tests run in Rainforest.
Now that we’ve covered the basics of regression testing, we’ll share five best practices that we’ve honed from helping dozens of agile teams develop effective automated regression testing strategies with Rainforest.
In general, automation is a great fit for regression testing because regression tests get run frequently and tend to be repetitive. And automating your regression suite is the key to making sure regression testing doesn’t slow down sprint cycles.
However, not all regression test cases are appropriate for automation. If a feature is still in beta and developers are making frequent changes between each iteration, it’s not stable enough for automation. And if a test requires human interpretation (e.g. if the feature includes a CAPTCHA), you’ll need to hand that off to human testers.
We share more details about optimizing manual and automated testing in this article.
Rainforest offers a crowd testing platform to run any test cases that aren’t a good fit for automation. You use the same dashboard to manage both manual and automated tests and just choose human testers when needed.
A false failure, also called a false positive, is when a test fails because of an error in the test, not an error in the application.
When a UI change breaks the test, the test will continue failing until someone refactors it to match the new UI. If you don’t fix broken tests, your test suite will become out-of-date and start returning more failures than passes, and failure categorization will take a long time. If the regression suite becomes stale, your team will lose faith in it and may stop using it altogether.
With any automation tool, some false failures are inevitable, but the tool you choose will make a big difference in why tests break and how easy it is to fix them.
Rainforest QA offers several features that make test maintenance easier than with tools like Selenium or Cypress.
Rainforest QA tests are much less susceptible than other tools to breaking from minor code changes that don’t affect the UI. This is because Rainforest tests interact with the visual layer of the app, like a real user would, instead of interacting with the underlying code (the DOM).
Let’s say a developer renames the CSS class of an element to match updates in the product. This kind of change wouldn’t affect the user experience at all, but it could cause tests created with Selenium or Cypress to fail, because those tools typically use selectors like class names to locate elements during tests.
With Rainforest, instead of using the name of the element to locate it, you take a screenshot of the element. The test will search the whole page for the pixels that match your screenshot. As long as Rainforest can find that exact screenshot on the page, the test will pass.
Rainforest tests only break if there’s a change to the visual layer of the application, which is more likely to happen when there’s a real bug that will affect the user experience.
That said, Rainforest tests are not totally immune to “false positive” failures: a minor UI change, such as a button changing shape, may still break a test.
To help with this, Rainforest QA also offers a text matching feature, which you can turn on or off in any test step. This way, even after a minor UI change, Rainforest can still find the text on a button (say, “Try For Free”) and successfully complete that step.
Every test, whether it passes or fails, gets recorded in Rainforest to make it easy to see whether a test failed because of a problem with the test or an actual bug in the application.
If it fails because of a legitimate bug, you can identify the bug and send a ticket to your developers. And if it fails because the test is broken, it’s easy to identify what needs to be fixed in the test because you can see the failure in the context of the whole test run rather than as an isolated event.
The screenshot above shows a test that has failed. The video replay is on the right, and you can see on the left that the test failed during Step 7 because it couldn’t find a visual match for the element labeled “Success Message”.
The video recordings also capture more than just what a human would see during the test. We record data about the browser settings, the network traffic, and a variety of other factors that could affect the software performance. This data can help developers fix bugs faster.
Since Rainforest tests aren’t written in Selenium, Cypress, or any other code-based testing framework, anyone can edit tests when they need to be updated to reflect a change in the UI.
In most cases, all you need to do is update the screenshot of the element that changed—no hunting through code for the selector you need to fix. But if you need to make changes to the sequence of steps in the test, or add new steps, you can do all of that in the visual editor without re-writing or re-recording the test.
For many test failures, Rainforest will suggest fixes automatically when it recognizes that the existing screenshot is no longer accurate. Most users find they can edit tests in a few seconds with these features.
With other codeless tools that use a web recorder to create tests, you often have to completely re-record the test to fix errors. Most of them use some form of AI to try to replace missing locators, but if that doesn’t work, you’ll either have to manually edit the Selenium code or you may have to recreate the whole test from scratch.
Note: For more best practices to reduce test automation maintenance, read this article.
One of the biggest benefits of regression testing in an agile context is the ability to get fast feedback about how your latest build impacts existing features. The best way to get this feedback is to integrate your regression suite into your release cycle.
Rainforest QA offers an API, CLI, and a CircleCI Orb, so developers can kick off a suite of Rainforest tests along with their unit tests and integration tests.
A hidden benefit of this continuous testing approach is that it encourages everyone on the team to keep the test suite running as smoothly as possible, because everyone is equally affected by an out-of-date test suite.
Once you’ve been doing automated regression testing for a while, you may start to notice that your test suite contains some tests that are longer than they need to be or cover obsolete features.
Let’s say you had a test that took 12 steps to navigate to a feature hidden deep in the product, but then in the latest update of the application, that feature now has a shortcut from the dashboard. Even though the test can still pass in the 12-step version, it’s now taking an inefficient path. Having unnecessary steps in a test increases the likelihood of the test breaking, and means it takes longer to run the test and find the reason for a failure.
If an existing test case covers an old feature that no longer represents a core business function, then it might be worth purging the obsolete test case to keep the overall run-time of your test suite down.
Remember: More test cases doesn’t always mean better test coverage.
As you develop your workflow around running tests, categorizing failures, and fixing broken tests, you may find that your team can’t afford to fix every broken test after each test run. Maybe one team member sees that 97 out of 105 tests passed, and they can tell that the failures are all because of broken tests rather than real bugs, so they push the code to production and move on.
When the next team member tries to run the regression suite, those eight tests will fail again. Running tests that you already know are broken can be costly. If you’re using Rainforest, you pay for every second of run-time. But no matter what tool you’re using, repeatedly running broken tests is a bad idea because it slows down your test suite and makes more work for everyone.
A good solution to this problem is to quarantine broken tests.
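The quarantine step amounts to excluding known-broken tests from the next run until someone fixes them. A minimal sketch, with hypothetical test names and statuses:

```python
# Skip tests that are quarantined (known broken) until someone fixes them.
# Names and statuses are illustrative only.
suite = [
    {"name": "Login", "quarantined": False},
    {"name": "Checkout", "quarantined": True},   # broken by a UI change
    {"name": "Search", "quarantined": False},
]

def runnable_tests(suite):
    """Return the names of tests that aren't quarantined."""
    return [t["name"] for t in suite if not t["quarantined"]]

print(runnable_tests(suite))  # → ['Login', 'Search']
```

The key is that quarantining is temporary: a quarantined test should carry an owner and a deadline, or it quietly becomes a hole in your coverage.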
Rainforest has a simple way of letting you do this. When you look at your list of tests, you can select the ones that failed during the last run (marked by an orange “no” symbol under the column “Last Result”) and disable them with the “Pause” button in the top left corner of the page.
They’ll drop to the bottom of your list in a ghosted out color, and they won’t run until you enable them again.
As your application grows, it usually becomes necessary to prioritize your regression testing and only test select feature groups after each small change (as we mentioned before, this is called selective re-testing).
There are a few different strategies for deciding what to test. Any of them carries some risk, because it’s tricky to predict which features could be affected by each change, but bringing a variety of stakeholders into the decision can help.
But to actually pick and choose, you need to be able to easily find the tests that cover the features you’ve prioritized. You don’t want to have to open each test and dig through the steps to see whether it covers the ability to delete a file, for instance. This is where naming conventions come in.
Rainforest makes it easy to categorize tests according to features and Run Groups, but you still have to choose a name that accurately describes what features are being tested, for example, “New Workspace - Add Another Teammate.” Make sure everyone on the team agrees on how to name, describe, and organize tests.
An accessible test automation tool is the key to making regression testing compatible with agile development. With Rainforest QA, anyone on your team can build, run, interpret, and fix automated regression tests for web applications to ensure that regression testing doesn’t become a bottleneck that slows down sprints.
Sign up for Rainforest QA to start building out your automated regression test suite—you can run up to five hours of no-code automated tests for free, every month. It’s only $5/hour after that.