At Rainforest, we believe QA doesn't have to suck.
Over a decade of helping startups improve quality, we’ve arrived at a set of principles that work for teams shipping fast and frequently.
We’ve built Rainforest to make it easy for you to put these principles into practice.
The people who build the product — particularly product managers and developers — are incentivized to ship and have the context to make informed tradeoffs between speed and quality. Siloed QA roles, on the other hand, tend to slow down the release process because they’re incentivized to catch as many bugs as possible, whether the bugs are meaningful or not.
Therefore, product builders should own responsibility for quality.
A lack of forethought and planning inevitably leads to gaps in test coverage.
When defining a new feature or sprint, determine the tests you’ll need to add or update to account for product updates. Don’t consider a feature specification complete until test coverage requirements have been defined.
Being responsible for quality assurance implies planning, creating, and maintaining tests. To make these things possible for everyone on the product team, you need testing tools that anyone can use — tools that can be adopted quickly and easily without any special skills or training.
With the right testing tools, any role that depends on product quality (founders, support staff, marketers, and more) can access, interpret, and update the test suite, regardless of technical skill. Quality becomes more of a team sport.
Product managers and developers can collaborate on test creation, but developers should be the ones primarily responsible for kicking off and maintaining automated tests.
Developers run deployments, so they’re the ones who should trigger testing at the appropriate quality gates.
Developers also have the most context around the changes being shipped and are the best-situated to unblock the release process. So, when automated tests fail, developers should fix the underlying issue(s), just as they do with unit tests.
This approach requires the test automation tooling to be highly usable by any developer, ideally without needing to learn a new language or framework.
Software development teams tend to prioritize hitting deadlines over practices that promote product quality. Amidst the urgency to ship code, testing discipline tends to suffer. The automated, systematized workflows of a CI/CD pipeline provide both speed of execution and checkpoints for test creation, execution, and maintenance.
Plus, CI/CD is designed for frequent delivery of small releases that allow teams to quickly uncover and fix issues with the least amount of work.
Therefore, create, run, and maintain tests within a CI/CD pipeline.
A lack of test coverage for important features is how critical bugs escape into production.
Any tests specified during planning should be created before merging the corresponding feature.
A strong CI/CD pipeline includes checkpoints for code reviews.
The policies that govern code reviews and the decisions to merge new code should include considerations for test coverage. Specifically, before merging can proceed, require that any new tests – as defined during the planning process – have been implemented and that code on the feature branch passes the updated test suite.
Ideally, run your test suite within CI at quality gates triggered by a pull or merge request.
At a minimum, to prevent issues from hitting production, set your CI/CD pipeline to run automated end-to-end testing as part of every release. Any test failures should block the corresponding release.
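The release-blocking gate above can be sketched as a small script. This is a minimal illustration, not a prescription for any particular CI system; the test command shown is a placeholder you'd swap for your real end-to-end runner (e.g., `["pytest", "tests/e2e"]`).

```python
# Sketch of a release gate: run the end-to-end suite and allow the
# release only if every test passes. The command here is a placeholder.
import subprocess
import sys

def release_gate(test_command):
    """Return True only if the test command exits successfully."""
    result = subprocess.run(test_command)
    return result.returncode == 0

if __name__ == "__main__":
    # Placeholder command that always "passes"; a real pipeline would
    # invoke the actual e2e runner here and halt deployment on failure.
    ok = release_gate([sys.executable, "-c", "raise SystemExit(0)"])
    print("release allowed" if ok else "release blocked")
```

In a real pipeline, a non-zero exit from this script is what stops the deployment step from running.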
The same person who enforces policies and drives best practices in your development and deployment workflows should do the same for your QA workflows.
Test suites that take too long to run or are too difficult to maintain can become bottlenecks in the release process. When testing is a bottleneck, product teams are more likely to skip it so they can keep shipping code – which puts quality at risk.
Therefore, be judicious about adding coverage and continually prune low-value tests from your suite to keep it fast and maintainable.
After a blizzard, snow plows clear the most-trafficked streets first because they affect the most people. Later, those plows might clear some side streets while ignoring other streets altogether.
Applying the Snowplow Strategy to your testing means focusing on the user flows that are most important to your users and to your business. Ignore the flows you wouldn’t immediately fix if they broke.
Good testing is like a good product roadmap: the ‘nice-to-haves’ go in the backlog.
It’s tempting to test as many things as possible in the name of quality assurance. But not all user flows in your application are equally important, and every test you add carries an ongoing maintenance cost. So if you’re already struggling to keep up with test maintenance, think twice before adding a new test to your suite until you’ve retired an old one.
For each test you create, cover a user flow that is as short and focused as possible. Short tests finish quickly, are easier to debug, and are easier to maintain.
When left unaddressed, broken tests can prompt a vicious cycle: they erode trust in the test suite, which leads to less investment in the test suite, which leads to more broken tests. Ultimately, the test suite's reliability suffers, and so does quality.
When a broken test is detected, promptly evaluate it using the guidelines in this section to determine whether it should be fixed or removed. Code review policies should require that the test suite be free of broken tests before code can be merged.
The test environment is the most overlooked aspect of good testing. Thoughtfully configuring test environments and seeding state are critical to building a system designed for quality.
Inconsistency among test environments adds unnecessary complexity to debugging any issues revealed by testing.
Therefore, use automation both to create and deploy to test environments as part of your CI/CD pipeline. Using automation to standardize test environments makes issues easier to reproduce, debug, and write tests for.
Have staging, QA, and production environments that reflect the stages in your development and release process. It should be clear what version of code is running on each environment so when you find a bug, you know who should fix it.
Provisioning a dedicated environment for every feature branch clarifies ownership of any issues and lets different teams work independently without collisions that slow things down.
Giving each feature branch an environment that developers can use for end-to-end testing also supports shift left practices, which help the team catch new issues before code becomes too complex to easily debug.
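One practical piece of per-branch environments is pointing end-to-end tests at the right deployment. A common convention is to derive the environment URL from the branch name; the naming scheme and domain below are assumptions for illustration.

```python
# Sketch: derive a per-branch preview environment URL so e2e tests can
# target the right deployment. The slug scheme and domain are
# illustrative, not a standard.
import re

def branch_env_url(branch, domain="preview.example.test"):
    """Turn a branch name into a predictable environment hostname."""
    slug = re.sub(r"[^a-z0-9]+", "-", branch.lower()).strip("-")
    return f"https://{slug}.{domain}"
```

With a convention like this, the CI job that runs end-to-end tests can compute the target URL from the branch under test instead of hard-coding it.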
Aim to set up test data in your test environment that can shorten your tests, thereby reducing their complexity and maintenance costs.
For example, if you want to confirm the “past orders” page on an e-commerce site is working as intended: instead of adding steps to the test to create backdated orders, automatically add sample orders to your test environment upon deployment.
To avoid broken tests, reset the state of test data before each execution of your test suite.
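The seed-on-deploy and reset-before-run ideas above can be combined in one routine. This sketch uses SQLite for illustration; the orders table and its columns are hypothetical stand-ins for your real schema.

```python
# Sketch: seed known-good sample data and reset it before each suite
# run. SQLite and the "orders" schema are illustrative assumptions.
import sqlite3

SAMPLE_ORDERS = [
    ("ORD-1", "2023-01-15", "delivered"),
    ("ORD-2", "2023-02-02", "delivered"),
]

def reset_and_seed(conn):
    """Drop any leftover state, then insert backdated sample orders."""
    conn.execute("DROP TABLE IF EXISTS orders")
    conn.execute(
        "CREATE TABLE orders (id TEXT PRIMARY KEY, placed_on TEXT, status TEXT)"
    )
    conn.executemany("INSERT INTO orders VALUES (?, ?, ?)", SAMPLE_ORDERS)
    conn.commit()
```

Calling a routine like this before every suite run means a "past orders" test can start from a known state rather than spending steps creating backdated orders itself.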
Running tests concurrently shortens the time it takes to get results. But when tests share test accounts and states, running them concurrently can create conflicts and subsequent false-positive test failures that waste the team’s time and energy.
Wherever possible, seed unique test data and account profiles to each test so all of your tests can run concurrently without colliding.
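A simple way to keep concurrent tests from colliding is to mint a unique account for each test. The account shape below is illustrative; the point is that no two tests ever share an identifier.

```python
# Sketch: generate a unique, throwaway account per test so concurrent
# runs never share state. The fields shown are illustrative.
import uuid

def make_test_account(prefix="test"):
    """Return account details guaranteed to be unique to this test."""
    unique = uuid.uuid4().hex[:8]
    return {
        "email": f"{prefix}-{unique}@example.test",
        "username": f"{prefix}_{unique}",
    }
```

Each test (or test fixture) calls this once at setup, so parallel workers operate on disjoint data and false failures from shared state disappear.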
Testing should represent the customer experience as much as possible. This means using the right kind of automation and recognizing when automation isn’t a good fit.
Many approaches to test automation evaluate what the computer sees in the DOM, not what your end users will see in the user interface (UI). To validate the actual user experience, use an automation approach that interacts with the UI.
Automation isn’t a fit for every use case. It’s a great fit for rote, repetitive tests that don’t require subjective evaluation and for features not subject to significant change.
Manual testing is a better fit for new features still in flux, particularly complex use cases, situations that benefit from human judgment, and high-risk releases where manual testing can augment automation.