Manual regression testing is time-consuming, costly, and difficult to scale as your team grows. As you add more features to your product, you have to hire more people and spend more time completing your regression test suite in every software release cycle.
Automating your regression test suite can help your team scale up testing without adding more headcount. Additionally, with Rainforest QA, anyone on your team can create and manage automated regression tests, meaning you don’t need to hire a QA engineer.
In this post, we go into the details of using Rainforest QA versus other regression testing tools that use programming languages (e.g., Selenium, Cypress, or Appium) to automate regression testing. We also answer the following questions:
Ready to automate your regression testing? Sign up for a free 14-day trial with Rainforest QA.
Regression testing often gets confused with smoke testing, sanity testing, and re-testing.
If you’re using a no-code automation testing tool like Rainforest QA, test writing and test execution will look the same. The difference between these kinds of tests is why you run them, when in the development lifecycle you run them, how many times you run them, and the number of tests in a run.
As we define these types of testing, we’ll use the example of an e-commerce company whose development team is working on an update to give their search button the ability to autocorrect misspelled words.
Regression testing is a series of tests done right before releasing a new feature or product to ensure that critical functionalities of the app are still working as a whole.
The number of tests run during regression testing varies (the next section goes into detail about what to include in your regression tests), but, in general, it will have more tests than the other types of testing mentioned below. Regression testing is only done once for each new change. If no bugs are found, you can move on to release. If bugs are found, you do re-testing.
For example: This would include multiple tests that cover every user path through critical, existing functionalities such as searching through product categories (e.g. women’s shoes, men’s shoes, etc.), adding products to the cart, viewing the cart page before checkout, checking out, signing up for promotional emails, creating an account, logging in to an existing account, and clicking through other main sections of the website (e.g. FAQ page, About page, etc.).
Re-testing is the process of repeating any failed regression tests (and any regression tests for functions closely related to the failed test) to ensure the bugs were fixed. It also ensures the fix didn’t create any new bugs. Re-testing is repeated until all the bugs are fixed.
For example: A bug made it so that any name entered for a credit card was rejected. Once the bug was fixed, test scripts for adding products to the cart, viewing the cart page before checkout, checking out, and creating an account were re-run.
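In an automated suite, re-testing amounts to a simple loop: after a regression run, re-run only the failed tests (plus any tests for closely related functions) until everything passes or you give up and go back to debugging. Here’s a minimal sketch in Python; the test names, the `related` map, and the `run_test` callable are all hypothetical stand-ins, not part of any particular tool:

```python
# A sketch of a re-testing loop: re-run failed tests (and tests for
# closely related functions) until they all pass. Test names and the
# run_test callable are hypothetical illustrations.

def retest(failed, related, run_test, max_rounds=3):
    """Re-run failed tests plus their related tests until all pass."""
    to_run = set(failed)
    for name in failed:
        to_run.update(related.get(name, []))  # pull in related tests
    for _ in range(max_rounds):
        still_failing = {name for name in to_run if not run_test(name)}
        if not still_failing:
            return True   # all bugs fixed
        to_run = still_failing
    return False          # failures remain after max_rounds

# Example: 'checkout' failed; 'view_cart' is closely related. After
# the fix, both pass on the first re-test round.
fixed = {"checkout": True, "view_cart": True}
result = retest(["checkout"], {"checkout": ["view_cart"]},
                run_test=lambda name: fixed[name])
```

The point of the sketch is the scope: only the failed tests and their neighbors are re-run, not the whole regression suite.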
Sanity testing is a smaller series of test runs done before regression testing to test the critical functionalities of a new feature and directly related features.
For example: One sanity test would check whether results appear when you search for something, and whether you can click links in the results list.
Smoke testing is an even smaller set of tests done before sanity testing to ensure the basic functionalities of a new feature are working.
The goal is to identify major bugs in the new code at the earliest possible point in the development cycle to prevent time from being wasted on builds that eventually have to be scrapped.
For example: This would include testing whether you can type something in the search bar and have results appear.
In this section, we share details about our methodology for choosing which test scripts to include in a regression suite and the benefits of running fewer tests.
For every test you run (even when it’s automated), there’s setup time, run time, and time spent analyzing results. And every time you make changes to your app, you’ll have to make updates to some of your automated tests to reflect those changes.
If you try to create (or have a computer create) tests for every user flow, edge case, and weird scenario inside your app, you’ll end up with hundreds or even thousands of regression test cases. It would be nearly impossible to even analyze all of the results from that many tests, let alone maintain that large of a test suite.
The time spent on all of these tests may be worth it if all of the tests are essential to your team's success — but not all tests are created equal.
Let’s use the e-commerce business as an example again. If there’s a bug causing the total purchase amount at checkout to be hidden, all customers will be affected. If there’s a bug that won’t let customers use non-roman letters for the name on their credit card, only a few customers will be affected. You may want to fix the ‘non-roman letters’ bug soon, but fixing the ‘total amount at checkout’ bug before release is necessary regardless of how tight your deadline is.
We’ve developed a technique for getting the maximum benefit from automated regression testing called the Snowplow Strategy. If you think about your app like a city, then all the possible user paths through the app are like all the highways, roads, and side streets through a city. When it snows, the snowplow crew follows a predefined route through the city to clear the most trafficked streets first because they affect the most people. Likewise, your regression testing strategy should prioritize features and user paths that affect all or most of your users.
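The Snowplow Strategy boils down to an ordering problem: run (and fix) the tests for your busiest user paths first. As a rough illustration, here’s how you might rank a regression suite by estimated user impact in Python; the test names and traffic numbers are invented for the example:

```python
# A sketch of the "Snowplow Strategy": order regression tests by how
# many users the covered path affects, so the highest-traffic flows
# are tested (and fixed) first. All numbers here are hypothetical.

tests = [
    {"name": "non_roman_card_name",    "users_affected": 500},
    {"name": "checkout_total_visible", "users_affected": 100_000},
    {"name": "add_to_cart",            "users_affected": 95_000},
]

# Plow the busiest streets first: sort descending by impact.
plow_order = sorted(tests, key=lambda t: t["users_affected"], reverse=True)

priority_names = [t["name"] for t in plow_order]
```

Under this ordering, the ‘total amount at checkout’ test runs before the ‘non-roman letters’ test, matching the prioritization described above.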
You can read more about how our customers use the Snowplow Strategy to improve their test coverage in this article.
Instead of running every test you can think of, you can save time by focusing your regression testing strategy on tests that cover critical functionalities at the level of granularity that’s important to you.
Here are some questions that will help you hone your regression test suite:
Automating as few as 5 or 10 tests for critical path functions will immediately start saving your team time. After the initial set of tests, you can build new tests as you’re able.
Rainforest QA is our no-code quality assurance platform designed to speed up your release process without adding headcount or letting quality suffer.
Here are four reasons why Rainforest QA is the best option for automating regression testing rather than using other code-based regression testing tools:
With most code-based platforms, setting up an automated regression test suite is costly due to the amount of time it takes and the level of expertise required (i.e. a QA engineer’s salary).
Rainforest QA changes that by giving anyone the ability to create tests and contribute to quality assurance — no technical background required.
To create (or edit) any test step in Rainforest QA, simply select a preset action (like “click”, “select”, or “type”) and then click-and-drag to select the element you want to apply the action to.
Once you’ve created each action along the user path you want to test, you can play back the actions you created to verify that the test will do what you intended.
Then, when you’re ready to test, you launch it with the click of a button, and our automation service will run your test(s) on a virtual machine in our cloud.
For a more detailed look into how to create a full test, check out this 4-minute video.
Note: You can also use our test writing service and have our group of Rainforest experts build a suite of 20 tests for you within a week.
Not only is it easy to create a test, it’s also easy for anyone to read the test and know exactly what’s going to happen (see the left-hand column below).
Rather than testing behind-the-scenes code like most regression testing tools, Rainforest tests the visual layer of the UI by matching pixels.
By testing the pixels, we’re testing what the user will experience (rather than what the computer sees behind the scenes). Because Rainforest tests the visual layer of the application, its tests are less susceptible to breaking due to minor code changes that don’t affect the UI.
If you’re not using Rainforest QA, your options will be to write the tests yourself using an open-source testing framework (e.g., Selenium, Cypress, or Appium), or to use another no-code tool. If you write the tests yourself, you’ll need a QA engineer and a software testing grid (such as Sauce Labs or BrowserStack), which quickly adds cost.
If you use other no-code tools, you’ll find they’re mostly just variations on easier ways to generate test code. This means they only test the backend code layer rather than the UI.
We go into more detail about why we think testing the UI is better here, but in a nutshell, it’s because this type of testing is the most representative of what users will actually experience.
If you want to learn more about how to automate any test, read this tutorial.
Rainforest QA is an all-in-one test automation solution, which means it has built-in tools to allow you to create and manage any number of automated tests and run them in parallel without paying for things like a software testing grid or test case management tools.
Rainforest runs tests on our extensive network of cloud-based virtual machines. You can choose from current and older versions of any of the four major browsers (i.e. Safari, Microsoft Edge, Chrome, and Firefox) on either desktop or mobile.
Our automation service is the fastest, least expensive option, but we also offer a team of professional QA testers if you need to run tests that can’t be automated or that could benefit from subjective, human feedback.
You can also design a test that leaves the browser and interacts with other apps. Here’s an example video of a test that saves a file to a desktop and then uploads the file to Google Drive. This feature is useful for validating downloads or uploads.
With Rainforest QA, you can scale your testing up or down at any point and only pay for the tests you run.
Most other test automation tools have tier-based pricing, meaning they charge you a fixed rate for a given number of tests each month — even if you run fewer tests than the tier allows. More often than not, this means you’ll be paying for more than you need or you’ll run fewer tests than you need to avoid the higher price.
Rainforest QA makes it easy to understand why any test fails by recording a video of every test (even ones that pass). Within the video recording, you can see which action failed and why: the action gets highlighted in red, and a brief message appears stating the reason it failed.
This lets anyone read and understand test results. If the test failed because the test itself was broken (e.g. due to a product change), anyone can jump into the visual test editor to fix the test steps. If it was a failure caused by an actual bug, you can add tickets to Jira with one click or download the relevant video recording and HTTP logs for your engineering team.
With an open source test automation framework like Selenium, it often takes a lot of time to figure out why and when a test failed. Testers may have to recreate every test step to figure out what caused a failure. Some software development teams add code to capture a screenshot and/or the underlying code at the point of failure, but this doesn’t always reveal the cause of the bug.
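One common version of that screenshot-capture pattern is to wrap each test so an artifact is saved the moment it fails. Here’s a hedged sketch in Python; the `driver` object is a stand-in for a real Selenium WebDriver (whose `save_screenshot` method does exist), and the test function and file names are hypothetical:

```python
# A common pattern around Selenium-style tests: wrap each test so a
# screenshot is captured at the point of failure. `driver` stands in
# for a real selenium.webdriver instance; here we use a fake so the
# sketch runs without a browser.

def run_with_screenshot(test_fn, driver, artifact_path):
    """Run test_fn(driver); on failure, save a screenshot and re-raise."""
    try:
        test_fn(driver)
    except Exception:
        driver.save_screenshot(artifact_path)  # capture UI state at failure
        raise

class FakeDriver:
    """Stand-in for a WebDriver, so the example runs without a browser."""
    def __init__(self):
        self.shots = []
    def save_screenshot(self, path):
        self.shots.append(path)

def failing_test(d):
    raise AssertionError("total amount hidden at checkout")

driver = FakeDriver()
try:
    run_with_screenshot(failing_test, driver, "checkout_failure.png")
except AssertionError:
    pass  # the failure still propagates to the test runner
```

As the post notes, a screenshot of the final state doesn’t always reveal the cause of a bug; it only narrows down where to look.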
In Rainforest QA, you can quickly embed one test in another. This means you can create just one test for signing up, for example, and embed that test in every other test that requires a login step. This saves a ton of time when building your regression suite and throughout the testing process.
Here’s an example of how we embed a ‘Rainforest Signup Flow’ test into another test (it includes every action from clicking the ‘Try for Free’ button through verifying that a ‘Successful’ message will appear):
If any steps in the signup flow need to be updated (due to a product change), we can update the test in just one place and it’ll automatically be updated in every single test that has the “Rainforest Signup Flow” test embedded in it.
Other software options (code or no-code) can also create common sets of steps that can be reused in any test, but you’ll be reliant on a QA engineer to fix any problems that arise.
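In code-based frameworks, the rough analogue of an embedded test is a shared helper: define the signup steps once and call that helper from every test that needs them, so a product change means one update rather than many. A minimal sketch in Python, with invented step names standing in for real test actions:

```python
# A sketch of "embedding" shared steps in a code-based suite: the
# signup flow is defined in exactly one place and reused by every
# test that needs it. Step names here are hypothetical.

def signup_flow(steps):
    """The shared signup steps, defined once."""
    steps.append("click 'Try for Free'")
    steps.append("fill in email and password")
    steps.append("verify 'Successful' message")

def test_checkout():
    steps = []
    signup_flow(steps)           # embedded shared steps
    steps.append("add product to cart")
    steps.append("check out")
    return steps

def test_account_settings():
    steps = []
    signup_flow(steps)           # same helper, reused
    steps.append("open account settings")
    return steps
```

If the signup flow changes, only `signup_flow` needs editing, and every test that calls it picks up the change, which is the same one-place-update behavior described above.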
Most of the time spent on regression test automation will be setting up your initial test suite. However, you’ll still need to set aside time to maintain your test suite and evaluate your testing program overall.
In addition to updating existing tests that break because of new changes to the app, you’ll also need to keep expanding your regression test suite as your app grows.
Tests are most likely to break after a major software update. Although some breaks (due to changes in the UI) are inevitable, Rainforest QA will identify many of these errors and make smart suggestions that allow you to fix tests with one click. (You can also use the Test Writing Service to help update tests after a major release.)
As you expand your regression test suite, you may start to wonder: how many tests are enough? And how do you know you’re testing the right things, at the right level of detail?
Simply using the number of tests in your regression suite as an indicator of quality is ineffective because it doesn’t tell you whether your critical functionalities are being covered or how much individual bug fixes are costing you. The following questions will help you determine if critical functionalities are being covered at the level of granularity that is important to you:
Get Started with Automated Regression Testing with Rainforest QA
With Rainforest QA, anyone can automate regression testing without learning a new programming language or buying additional services from other vendors. It enables QA teams to do continuous QA and keep up with agile development teams practicing continuous integration and continuous delivery.
It’s a scalable, all-in-one solution that’s appropriate for small teams just getting started with automated testing or QA-mature teams regularly running 500+ software tests.
You can try Rainforest QA yourself by starting a free 14-day trial.