Flaky tests are automated software tests that sometimes pass and sometimes fail without an obvious cause. Often these tests work well for a while, then start failing intermittently. If a test passes on a second or third try, the tester typically chalks the failures up to a glitch in the system and ignores the failed test result.
If the test continues to perform inconsistently, the tester may stop running that test altogether, losing potentially critical test coverage.
Either way, if failed test results are being ignored or tests aren’t being run, real bugs can get missed.
While it may seem like nothing has changed between passing and failing test runs, in reality, there’s always something that changed to cause the failed test result.
This article will help you understand what causes flaky tests, how to handle failed results from flaky tests, and how to prevent flakiness in your test suite.
Note: If you’re a current Rainforest user and you’re looking for troubleshooting tips, see this article.
It’s easier to find the root cause of automated test failures with Rainforest QA than any code-based testing framework. Sign up for Rainforest QA’s no-code automation to see the difference. Run up to five hours of no-code automated tests for free, every month. It’s only $5/hour after that.
In an ideal world, you’d be able to test your application in the exact same test environment and return the exact same results every time. While some testing software can get close to this ideal (by using virtual machines with standard configurations instead of real devices, for example), the reality is that it can be very difficult to control every aspect of your test run.
This is especially true for end-to-end (e2e) tests because they involve so many moving parts and dependencies. Some teams believe it’s nearly impossible to mitigate flaky tests in automated e2e testing and therefore skip automating UI tests altogether. But e2e testing plays an essential role in ensuring a high-quality user experience. Even if automating your e2e tests means encountering a few flaky tests, automation can still significantly improve the speed and repeatability of your testing.
Plus, there are ways to mitigate flaky tests.
The first step is to understand what can cause flaky tests. Even though the term ‘flaky test’ may suggest the test itself is always at fault, many other factors can produce inconsistent results.
Three of the most common causes of test flakiness, other than issues with the test itself, are problems in the test environment, unreliable external dependencies (such as slow networks or third-party APIs), and shared test data that can collide when tests run at the same time.
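Issues with the test itself are often timing-related: the test asserts on a result before an asynchronous operation has finished. The following self-contained Python sketch (the `async_save` job and `wait_until` helper are illustrative, not from any particular framework) shows the flaky pattern and a polling fix:

```python
import threading
import time

# Hypothetical example: a background job (e.g., an async save) finishes
# after a short, variable delay -- the kind of timing that makes tests flaky.
result = {"saved": False}

def async_save(delay: float) -> None:
    """Simulates a backend write that completes some time later."""
    time.sleep(delay)
    result["saved"] = True

def wait_until(predicate, timeout: float = 2.0, interval: float = 0.01) -> bool:
    """Poll until the predicate is true or the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if predicate():
            return True
        time.sleep(interval)
    return predicate()

threading.Thread(target=async_save, args=(0.05,)).start()

# Flaky: asserting immediately races against the background job.
checked_too_early = result["saved"]  # may be True or False, depending on timing

# Stable: wait for the condition instead of assuming the timing.
assert wait_until(lambda: result["saved"])
```

Most browser-automation tools offer a built-in equivalent of `wait_until` (an explicit or polling wait); preferring those over fixed sleeps removes a whole class of flakiness.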
When a flaky test fails, it may be because of an issue in the test environment that won’t carry over into the production environment. However, some issues such as network errors, slow load times, or problems with third-party APIs could carry over and ultimately end up affecting the end user. If you ignore flaky test results, there’s a good chance you’ll be ignoring real problems.
While ignoring failed flaky test results isn’t a good idea, it’s not always practical to spend a lot of time troubleshooting inconsistent tests. There are several options for handling flaky tests, and each may be useful in different situations. However, it’s important to make sure you’re deliberately choosing the best response to maintain your team's standards of quality assurance (QA).
Your options for what to do with flaky tests include:
Regardless of how you handle flaky tests, it’s important to keep track of which tests produce inconsistent results, how you handled each failed test result, and the reason for the test failure whenever possible. Documenting each flaky test and what you did about it helps you and your team maintain faith in the test suite and develop your own best practices to prevent flakiness. It also helps you notice recurring patterns that could potentially be resolved.
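One lightweight, purely illustrative way to keep that record is a small structured log your team can export and review; every field name and value below is an assumption, not part of any real tool:

```python
import csv
import io
from dataclasses import dataclass, asdict

# Illustrative sketch: a lightweight log of flaky-test incidents so the
# team can spot recurring patterns. Field names are assumptions.
@dataclass
class FlakyTestRecord:
    test_name: str
    failure_date: str     # when the flaky run happened
    action_taken: str     # e.g. "retried", "quarantined", "fixed"
    suspected_cause: str  # e.g. "slow third-party API", "unknown"

def export_log(records: list) -> str:
    """Render the flaky-test log as CSV for team review."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=list(asdict(records[0])))
    writer.writeheader()
    writer.writerows(asdict(r) for r in records)
    return buf.getvalue()

log = [
    FlakyTestRecord("checkout_flow", "2024-05-01", "retried", "slow payment API"),
    FlakyTestRecord("login_smoke", "2024-05-03", "quarantined", "unknown"),
]
```

Even a shared spreadsheet serves the same purpose; what matters is that every flaky result, the action taken, and the suspected cause get written down.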
For instance, if your app is loading slowly in the test environment, consider upgrading your test machines and the associated components.
To find the root cause of flakiness, you have to determine what’s changing between test runs. But this can be a tedious task without the right tools.
With most open-source and code-based testing tools, you’ll end up sorting through lines of code to understand why some tests pass and others fail, which can be very time-consuming. If testing is normally handled by non-technical QA team members, discovering why each test failed falls to someone outside the QA team, usually a developer. This means finding the root cause of flaky tests can create a bottleneck in the software development lifecycle.
Rainforest QA solves this problem by making it much easier and faster for anyone to understand why a test failed, providing video replays and detailed test reports for every test.
Instead of using code to test code, Rainforest QA uses an intuitive visual editor to create test cases. To write or edit any test step, you choose an action (such as “click” or “fill”), then click-and-drag the mouse to take a screenshot of the element you want to apply the action to.
Looking at the set of steps in the screenshot below, anyone can follow along and understand what’s happening in the test:
And if a test fails, the test step that failed during a test run will be highlighted in red along with a brief message describing the failure:
For failures with a less obvious cause (as is often the case with flaky tests), you can investigate further with video replays of the test run, screenshots of each step, and HTTP logs.
If the root cause of the failure is an actual bug, Rainforest QA offers a Jira integration so you can automatically create a ticket for the development team. The ticket includes the failed test steps, a screenshot of the failed test step, HTTP logs, and a link to the full test results and video recording in Rainforest. Rainforest also integrates with Slack and Microsoft Teams, so you can get instant notifications of any test failure.
Although it’s difficult to completely eliminate flaky tests in automated testing, there are ways to minimize the number of flaky tests you run into:
Finally, good test data management can help mitigate flaky tests. As we mentioned above, if multiple tests use the same set of user data, then running tests at the same time could create collisions. To avoid these collisions, you have a few options.
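One common option is to give every test run its own data so concurrent runs can’t collide. A minimal Python sketch, where `create_user` is a hypothetical stand-in for whatever your application or harness actually calls:

```python
import uuid

def unique_username(prefix: str = "testuser") -> str:
    """Give each test run a username no concurrent run can collide with."""
    return f"{prefix}-{uuid.uuid4().hex[:12]}"

def create_user(username: str) -> dict:
    """Hypothetical stand-in for your application's user-creation call."""
    return {"username": username}

# Two tests running in parallel each get their own account,
# so neither can clobber the other's data.
user_a = create_user(unique_username())
user_b = create_user(unique_username())
```

The same idea applies to any shared resource, such as order IDs, file names, or email addresses: a random or per-run suffix keeps parallel tests out of each other’s way.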
If you have multiple tests that use the same user account, you’ll want to use reset protocols. Whether you reset the testing environment before every test or add steps to your tests to revert to a default state, resetting is important for reducing inconsistent test results.
For example, let’s say you have a test that verifies a user can change their username. If the username isn’t subsequently reset to the original username, every other test that uses that username will fail.
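The username example above can be sketched as a reset protocol in a few lines of Python; the names here are illustrative, not from any real framework:

```python
# Minimal sketch of a reset protocol: the test reverts whatever state it
# mutated, so the next test that uses this account sees the default username.
DEFAULT_USERNAME = "default_user"
account = {"username": DEFAULT_USERNAME}

def change_username(new_name: str) -> None:
    account["username"] = new_name

def test_user_can_change_username() -> None:
    original = account["username"]
    try:
        change_username("brand_new_name")
        assert account["username"] == "brand_new_name"
    finally:
        # Reset step: restore the default state even if the assertion fails.
        change_username(original)

test_user_can_change_username()
```

Putting the reset in a `finally` block (or your framework’s teardown hook) matters: the state gets restored even when the test fails, so one failure doesn’t cascade into many.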
With Rainforest QA, anyone can quickly figure out if a test is flaky or permanently broken, or if the software has a bug. It’s a scalable, all-in-one quality assurance solution that’s appropriate for small teams just getting started with automated testing or QA-mature teams regularly running 500+ automated software tests as part of their CI/CD pipeline.
Get started with Rainforest QA for free. You can run up to five hours of no-code automated tests for free, every month. It’s only $5/hour after that.