Even though automated testing helps you do more software testing in less time with fewer people, maintaining your test suite can be very time-consuming.
Many QA teams have a hard time keeping up with maintenance as their product grows. If they fall behind, they get more and more false positives (i.e., cases where the test fails because of a problem with the test, not a problem with the application). Eventually, they may lose so much confidence in the validity of their test results that the tests become useless, and they revert to manual testing.
The best way to avoid falling behind is to build automated test creation and maintenance into your release process, especially if you’re working in a continuous integration pipeline. This means that someone on your team, whether a developer or a dedicated QA specialist, must create automated tests covering any new features, and fix any tests that broke because of changes to the application, before the feature goes live.
If you can’t fit test automation maintenance in before a release, another option is to make it a mandatory step before any new features are built. Either way, you’ll need to make maintenance efforts a priority.
Once maintenance methodologies are built into the software development cycle, applying a few best practices to your testing efforts can significantly speed up maintenance.
In this article, we talk about eight best practices that will help prevent tests from breaking in the first place and help your team spend less time fixing tests when they do break.
Once you know how to automate testing with maintenance best practices, sign up for Rainforest QA to save even more time and money on test automation—you can run up to five hours of no-code automated tests for free, every month. It’s only $5/hour after that.
1. Write Fewer Test Scripts
When considering what test cases to include in an automated test suite, we like to use our Snowplow Strategy for automation test coverage.
Think of all of the possible user paths through an app like a city map with hundreds of streets. After a blizzard, snowplows work to clear the most trafficked streets first because they affect the most people. The side streets may get plowed later, but in large cities, some streets never get plowed.
Likewise, you should prioritize your testing process around the most important user paths to make sure they are working properly.
After each new software release, you’ll need to create automated tests for the most critical user paths through the new features, and you may need to update the regression test suite for the app as a whole.
A good way to identify the ‘most trafficked streets’ of the new feature is to model your test suite after the user flow the design team originally gave the developers. Some testers just start with the live web application and build automated test cases at random, without considering which user paths are the most critical. When they do that, they often catch bugs that aren’t high enough priority for developers to fix. And they also end up missing crucial bugs.
For more details on how to improve and simplify your automated test coverage, read this article.
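The Snowplow Strategy boils down to ranking candidate user flows by how much traffic they see and automating the busiest ones first. Here’s a minimal sketch of that ranking; the flow names and visit counts are hypothetical:

```python
# A toy sketch of the Snowplow Strategy: rank candidate user flows by how
# much traffic they see, then automate the highest-traffic flows first.
# The flow names and visit counts below are hypothetical.

candidate_flows = {
    "signup -> onboarding": 42_000,   # monthly sessions (made-up numbers)
    "search -> product page -> checkout": 31_500,
    "password reset": 4_200,
    "export account data": 310,
}

def prioritize(flows, coverage_budget):
    """Return the top flows to automate, most trafficked first."""
    ranked = sorted(flows.items(), key=lambda kv: kv[1], reverse=True)
    return [name for name, _ in ranked[:coverage_budget]]

# With a budget of two tests, the two busiest "streets" get plowed first.
print(prioritize(candidate_flows, coverage_budget=2))
```

In practice the visit counts would come from your analytics tool, but the principle is the same: spend your limited automation budget on the paths most users actually take.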
2. Don’t Test Features That Are Still in the Beta Phase

When a new feature has been released to a few of your customers but is still in the beta phase, it’s not ready for automated regression testing or end-to-end testing. The only tests worth automating during the beta phase are unit tests, and possibly integration tests to verify the feature’s basic functionality.
Automated end-to-end testing should wait until you’re ready to turn the whole feature on in production, because the product is still changing too often before then. If you create automated end-to-end tests during the beta phase, they’ll break frequently as developers tweak the product.
3. Use a Tool That Tests the Visual Layer, Not the DOM

Most automated testing tools (Rainforest QA being the exception) test the underlying code of an application (i.e., the DOM) rather than the visual layer.
Rainforest QA is the only software testing tool that creates UI tests using pixel-matching rather than code-based locators. This means you’re testing what the user will see rather than what the computer thinks is happening.
We go into more detail about why we think testing on the visual layer is better here.
In short, there are a variety of situations where the code might be present during test execution, but the element won’t show up to the user.
Selenium-based tools won’t catch these errors, and they also tend to break easily from minor changes to the code that don’t affect the user interface.
Rainforest tests aren’t affected by changes to the underlying code if they don’t affect the UI.
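To make the “present in the code but invisible to the user” point concrete, here’s a toy illustration using Python’s standard-library HTML parser (not a real Selenium test; the page markup is hypothetical). A DOM-level locator finds the element, even though inline CSS hides it from the user:

```python
from html.parser import HTMLParser

# Toy illustration: the promo banner exists in the DOM, so a locator-based
# check finds it, but its inline style hides it from the user. A DOM-level
# assertion would pass here while a visual check would fail.
PAGE = '<div id="promo-banner" style="display:none">50% off today!</div>'

class ElementFinder(HTMLParser):
    def __init__(self, target_id):
        super().__init__()
        self.target_id = target_id
        self.found = False       # is the element in the DOM?
        self.displayed = True    # would the user actually see it?

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if attrs.get("id") == self.target_id:
            self.found = True
            style = attrs.get("style", "").replace(" ", "")
            self.displayed = "display:none" not in style

finder = ElementFinder("promo-banner")
finder.feed(PAGE)
print(finder.found)      # True  -> a DOM locator "sees" the banner
print(finder.displayed)  # False -> the user never does
```

A test that only asserts the element exists in the DOM would pass here and miss a bug that every real user would notice.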
Here’s an example that illustrates this key difference between Rainforest and other testing tools.
To create (or edit) any test step in Rainforest QA, you select a preset action (like “click”, “select”, or “type”) and then click-and-drag to select the element you want to apply the action to.
In the above example, a Rainforest test will search for a group of pixels that matches the ‘Try for free’ button. If the line of code for this button gets slightly changed—if the locator ID gets renamed, for example—a Rainforest test won’t break as long as the button still exists on the page, whereas other tests will break.
If code changes do affect the UI, a Rainforest test may break, and in many cases, the failure will be because of a real issue that affects the user.
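The intuition behind pixel-matching can be sketched in a few lines: treat a screenshot as a grid of pixel values and search it for a smaller template. Renaming a locator ID changes the DOM but not the pixels, so the match still succeeds. Real tools use fuzzier matching than this exact search; the grids below are toy values:

```python
# A minimal sketch of pixel-matching: search a "screenshot" (a 2D grid of
# pixel values) for a template. Renaming an element's locator ID changes the
# DOM, not the pixels, so the match still succeeds. Real tools match far
# more tolerantly; this exact search just illustrates the idea.

def find_template(screen, template):
    """Return (row, col) of the template's top-left corner, or None."""
    th, tw = len(template), len(template[0])
    for r in range(len(screen) - th + 1):
        for c in range(len(screen[0]) - tw + 1):
            if all(screen[r + i][c + j] == template[i][j]
                   for i in range(th) for j in range(tw)):
                return (r, c)
    return None

button = [[1, 1],
          [1, 1]]          # pixels of the 'Try for free' button (toy values)
screen = [[0, 0, 0, 0],
          [0, 1, 1, 0],
          [0, 1, 1, 0],
          [0, 0, 0, 0]]

print(find_template(screen, button))  # (1, 1): the button is on the screen
```

If a code change leaves the rendered pixels unchanged, this kind of check keeps passing; if it visibly alters or removes the button, the match fails, which is exactly the failure a user would care about.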
4. Use a No-Code Tool for Test Automation
If you’re using an open-source test automation framework like Selenium or Cypress for writing test automation scripts, you’ll need a QA engineer with programming skills to build tests and fix them when they break.
And while many no- or low-code tools let anyone create tests, they often still require a QA engineer to fix breaks. Because Rainforest tests are created by choosing an action and taking screenshots (as shown in the previous section), anyone, even a non-technical person, can create or maintain a test in Rainforest QA, and no one ever needs to learn a framework like Selenium.
That means anyone in your company can do QA. Having an accessible QA tool frees up engineers to focus on building or updating new features. Even if you do have programming skills, using a no-code tool that anyone can use is usually faster than manually writing and maintaining every test script.
5. Use Embedded Tests Sooner Rather Than Later

Rainforest QA has an embedded test feature that saves you time during test creation and test maintenance by cutting down on writing repetitive test sequences.
For example, here’s a test we created called “Rainforest Signup Flow”:
This test covers a basic signup sequence that we use in a lot of our other tests. Here’s an example of how to embed that signup flow into another test:
From there, you can add additional steps to create a test for whatever you need. Then, anyone can modify the steps in “Rainforest Signup Flow” (in response to changes in the app or to fix a bug) and it gets applied to every other test that uses that signup flow.
Without embedded tests, if you find a bug in a login sequence, for example, you would have to fix the bug in every test that uses that login sequence. With embedded tests, you only have to fix the bug in one place and it gets applied across the board.
As you’re writing tests, try to recognize repeated sequences sooner rather than later to cut down on the number of tests that will need to be updated.
Let’s say you realize you’ve written the same five steps for the 10th time that day, so you create a test for those five steps that can be embedded in future tests. You’ll still have to go back to existing tests and replace the steps you wrote previously with the embedded test.
This may seem like extra work in the moment, but in the long run, it’ll save you a large amount of time. If you can notice a pattern after the 10th repetition rather than after the 50th repetition, you’ll be able to save even more time.
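An embedded test behaves much like a shared helper function in code: every test reuses the same steps, so a change to those steps is made once and applied everywhere. Here’s a rough analogy in plain Python; the app steps and test names are hypothetical:

```python
# A rough analogy in plain Python: an "embedded test" works like a shared
# helper function. Each end-to-end test reuses the same signup steps, so a
# change to the signup flow is fixed once, not in every test. The step and
# test names below are hypothetical.

def signup_flow(app):
    """Shared steps, analogous to an embedded 'Rainforest Signup Flow' test."""
    app.append("open signup page")
    app.append("enter email")
    app.append("click 'Try for free'")  # fix a broken step here, once

def test_create_first_project():
    app = []
    signup_flow(app)            # embedded steps
    app.append("create project")
    return app

def test_invite_teammate():
    app = []
    signup_flow(app)            # same embedded steps, reused
    app.append("invite teammate")
    return app

print(test_create_first_project()[-1])  # create project
```

If the signup page changes, only `signup_flow` needs updating, and both tests pick up the fix automatically, which is exactly the maintenance win embedded tests give you.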
6. Use Video Recordings to Find the Root Cause of a Break

Determining why certain tests failed is often one of the most time-consuming aspects of automated testing—particularly when using software that tests the underlying code.
Open-source frameworks like Selenium or Cypress have ways to capture a screenshot of the UI and/or a snapshot of the underlying code at the moment a test breaks. These can sometimes help identify the root cause of the test failure, but in many cases, the error happened sometime earlier in the sequence before the screenshot.
Let’s say you go through the steps of adding an item to a cart, then you go to checkout, and there’s nothing in the cart, which causes the test to fail. A screenshot will just show you an empty cart, but you won’t know whether it’s empty because the item never got added or if the app just failed to display the item in the cart.
A screenshot will tell you where a break happened but it won’t tell you why. Sometimes you even need to compare a failed test to a passed test to find out why a test failed. That’s why Rainforest records a video of every test (whether it passes or fails).
By watching these recordings, you’ll know the reason for a test result without spending hours searching through strings of code.
Test results from Rainforest also include browser settings, network traffic, and other factors that could help developers fix bugs faster.
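The advantage of a recording over a single failure screenshot can be simulated in a few lines: log the app’s state after every step, then walk the trace back from the failure. This toy runner (the steps and the injected bug are hypothetical) mirrors the empty-cart example above:

```python
# A toy sketch of why a recording beats a single failure screenshot: snapshot
# the app's state after every step, then walk the trace back from the
# failure. The steps and the injected bug below are hypothetical.

def run_with_trace(steps, state):
    trace = []
    for name, action in steps:
        action(state)
        trace.append((name, dict(state)))  # snapshot after each step
    return trace

def add_to_cart(state):
    state["cart"] = []          # injected bug: the add silently fails

def go_to_checkout(state):
    state["page"] = "checkout"

trace = run_with_trace(
    [("add item to cart", add_to_cart), ("go to checkout", go_to_checkout)],
    state={"cart": None, "page": "home"},
)

# A screenshot at checkout only shows an empty cart. The trace shows the
# cart was already empty right after the 'add item' step, pinpointing why.
print(trace[0])  # ('add item to cart', {'cart': [], 'page': 'home'})
```

A video recording gives you this same step-by-step history visually, so you can see the moment the app diverged from the expected behavior instead of only its final state.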
7. Use Suggested Fix and Text Matching Features

Rainforest QA has a built-in suggested fix feature that makes it quick and easy to repair broken tests. The suggested fix feature is triggered anytime a screenshot can’t be matched, even if the test passed.
The element that can’t be found is highlighted, and you can quickly take a new screenshot to fix the test.
Rainforest’s text matching feature is another way to help speed up maintenance. Text matching examines the content of an element rather than the appearance. For example, the buttons below both say “Buy Now” even though the colors and shapes are different.
If text matching is enabled, the test will pass with either version of the button. If text matching is not enabled, the test will only pass if your original screenshot matches the version being tested.
This is helpful if you’re only interested in testing the content of the button rather than the design.
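The difference between the two matching modes can be sketched in a few lines. Two versions of a button share the same label but differ in styling; a text matcher accepts both, while an appearance matcher only accepts an exact match (the attributes here are simplified stand-ins for real visual comparison):

```python
# A simplified sketch of text matching vs. appearance matching. The dicts
# stand in for two renderings of the same button: same label, different
# styling. Real visual matching compares pixels, not attributes.

original = {"text": "Buy Now", "color": "green", "shape": "rounded"}
redesigned = {"text": "Buy Now", "color": "blue", "shape": "square"}

def text_match(expected, element):
    """Pass if the label is right, regardless of styling."""
    return element["text"] == expected["text"]

def appearance_match(expected, element):
    """Pass only if every visual attribute matches exactly."""
    return element == expected

print(text_match(original, redesigned))        # True: same label, test passes
print(appearance_match(original, redesigned))  # False: styling changed
```

Choosing between the two is a statement about what the test should protect: the button’s content, or its exact appearance.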
8. Use the Test Writing Service

Rainforest QA’s test writing service can help you create new tests or update broken tests if you don’t have the time to do it yourself. You can submit a batch of up to 20 tests and our QA experts will return results in five days or less.
This takes the pressure off of your team and helps improve software quality while still meeting every deadline.
With Rainforest QA, anyone can automate and maintain tests without learning a new programming language or buying additional services from other vendors.
Rainforest QA cuts down on maintenance time, helping your team keep pace with agile software development and fast release schedules in a CI/CD pipeline.
It’s a scalable, all-in-one quality assurance solution that’s appropriate for small teams just getting started with automated testing or QA-mature teams regularly running 500+ software tests.
You can try Rainforest QA yourself—get started for free.