Over a decade of helping startups improve quality, we’ve arrived at a set of QA testing best practices that work for teams shipping fast and frequently. These best practices fit neatly into five organizing principles.

In short, these principles are:

  1. Product teams own quality — not siloed QA
  2. Effective testing requires CI/CD
  3. For test coverage, less is more
  4. A test is only as useful as its environment
  5. Test like a human (even with automation)

Together, these principles represent The Rainforest Method. We’ve built our QA solution to make it easy for you to put The Method into practice.

Throughout this piece, assume “testing” refers specifically to automated testing, unless noted otherwise.

1. Product teams own quality — not siloed QA

The people who build the product — particularly product managers and developers — are incentivized to ship and have the context to make informed tradeoffs between speed and quality. Siloed QA roles, on the other hand, tend to slow down the release process because they’re incentivized to catch as many bugs as possible, whether the bugs are meaningful or not.

Therefore, product builders should own responsibility for quality.

This isn’t to say you shouldn’t use QA roles — just that you should be using those roles for the right things. Working in close partnership with product builders, they can level up any QA initiative. They can add a lot of value in areas like defining processes, conducting exploratory testing, reviewing automated test failures (not all test failures are bugs — many simply reflect tests that need to be updated), and maintaining automated tests.

As long as your product builders are responsible for defining what gets tested, for executing tests at the appropriate times in your release pipeline, and for triaging bugs, you’re on the right track.

Build test planning into product planning

A lack of forethought and planning inevitably leads to gaps in test coverage.

When defining a new feature or sprint, determine the tests you’ll need to add or update to account for product updates. Don’t consider a feature specification complete until test coverage requirements have been defined.

Adopt accessible testing tools

Being responsible for quality assurance implies defining test coverage (i.e., defining the features and flows that should be tested) and then confirming the test coverage has been implemented correctly, regardless of who implements it. To make these things possible for everyone on the product team, you need testing tools that anyone can use — tools that can be adopted quickly and easily without any special skills or training.

With the right testing tools, any of the other roles in the organization that depend on product quality — including founders, support staff, marketers, and more — can access, interpret, and update the test suite, regardless of their technical skills. Quality becomes more of a team sport.

Developers execute and maintain automated tests

Product managers and developers can collaborate on defining test coverage, but developers should be the ones who kick off automated tests.

Developers run deployments, so they’re the ones who should trigger testing at the appropriate quality gates.

Developers also have the most context around the changes being shipped and are the best-situated to unblock the release process. So, when automated tests fail, developers should fix the underlying issue(s), just as they do with unit tests. 

Of course, many test automation frameworks are notoriously painful to use. This approach requires the test automation tooling to be highly usable by any developer, ideally without needing to learn a new language or framework.

Using generative AI, automated tests created in Rainforest can automatically update themselves to reflect intended changes to your app.

That means your developers spend less time on test maintenance and more time shipping code.

2. Effective testing requires CI/CD

Software development teams tend to prioritize hitting deadlines over practices that promote product quality. Amidst the urgency to ship code, testing discipline tends to suffer. The automated, systematized workflows of a CI/CD pipeline provide both speed of execution and checkpoints for test creation, execution, and maintenance.

Plus, CI/CD is designed for frequent delivery of small releases that allow teams to quickly uncover and fix issues with the least amount of work.

Therefore, create, run, and maintain tests within a CI/CD pipeline.

Add test coverage for new features before merging

A lack of test coverage for important features is how critical bugs escape into production.

Any tests specified during planning should be created before merging the corresponding feature.

Use code reviews to verify test coverage

A strong CI/CD pipeline includes checkpoints for code reviews.

The policies that govern code reviews and the decisions to merge new code should include considerations for test coverage. Specifically, before merging can proceed, require that any new tests – as defined during the planning process – have been implemented and that code on the feature branch passes the updated test suite.

Run end-to-end tests automatically in the release process

Ideally, run your test suite within CI at quality gates triggered by a pull or merge request. 

At a minimum, to prevent issues from hitting production, set your CI/CD pipeline to run automated end-to-end testing as a part of every release. Any test failures should block the corresponding release.
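As a sketch of this kind of quality gate (assuming GitHub Actions; the workflow name and the `npm run test:e2e` entry point are placeholders for your own CI system and test command), a pipeline can run the end-to-end suite on every pull request and make the release job depend on it:

```yaml
# Hypothetical workflow: run end-to-end tests on every pull request and
# before every tagged release. A failing e2e job blocks the merge/deploy.
name: e2e-gate
on:
  pull_request:
  push:
    tags: ['v*']
jobs:
  e2e:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run end-to-end tests
        run: npm run test:e2e   # placeholder for your suite's entry point
  release:
    needs: e2e                  # the release only proceeds if e2e passed
    if: startsWith(github.ref, 'refs/tags/')
    runs-on: ubuntu-latest
    steps:
      - run: echo "deploy step goes here"
```

The key design choice is the `needs: e2e` dependency: it makes "any test failure blocks the release" a structural property of the pipeline rather than a policy people have to remember.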

Enforce QA policies with a Quality Ops owner

The same person who enforces policies and drives best practices in your development and deployment workflows should do the same for your QA workflows.

3. For test coverage, less is more

Automated test suites that take too long to run or are too difficult to maintain can become bottlenecks in the release process. When testing is a bottleneck, product teams are more likely to skip it so they can keep shipping code – which puts quality at risk.

Therefore, be judicious about adding coverage and continually prune low-value tests from your suite to keep it fast and maintainable.

Apply the Snowplow Strategy

After a blizzard, snow plows clear the most-trafficked streets first because they affect the most people. Later, those plows might clear some side streets while ignoring other streets altogether.

Applying the Snowplow Strategy to your testing means focusing on the user flows that are most important to your users and to your business. Ignore the flows you wouldn’t immediately fix if they broke.

Good testing is like a good product roadmap: the ‘nice-to-haves’ go in the backlog.

Only create what you can maintain

It’s tempting to test as many things as possible in the name of quality assurance. But not all user flows in your application are equally important, and every test adds maintenance costs. So if you’re already struggling to keep up with test maintenance, don’t add a new test to your suite until you’ve retired an old one.

Keep tests short

For each test you create, cover a user flow that is as short and self-contained as possible. Short tests finish quickly, are easier to debug, and are easier to maintain.

Fix or remove broken tests right away 

When left unaddressed, broken tests can prompt a vicious cycle: they erode trust in the test suite, which leads to less investment in the test suite, which leads to more broken tests. Ultimately, the test suite’s reliability suffers, and so does quality.

When a broken test is detected, promptly evaluate it using the guidelines in this section to determine whether it should be fixed or removed. Code review policies should require that the test suite be free of broken tests before code can be merged.

4. A test is only as useful as its environment

The test environment is the most overlooked aspect of good testing. Thoughtful configuration of test environments and seeding state are critical to creating a system designed for quality.

Set up test environments for consistency

Inconsistency among test environments adds unnecessary complexity to debugging any issues revealed by testing.

Therefore, use automation both to create and deploy to test environments as part of your CI/CD pipeline. Using automation to standardize test environments makes issues easier to reproduce, debug, and write tests for.

Make it clear what version of the code is being tested

Have staging, QA, and production environments that reflect the stages in your development and release process. It should be clear what version of code is running on each environment so when you find a bug, you know who should fix it.
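One common way to make the running version visible (a minimal sketch — the `GIT_SHA` and `DEPLOY_ENV` variable names are hypothetical, and assume your CI pipeline injects them at deploy time) is to expose build metadata from the app itself:

```python
import os

def version_info() -> dict:
    """Report which build is running in this environment, so anyone who
    finds a bug knows exactly what code they were testing.

    Sketch only: assumes the deploy pipeline sets GIT_SHA and DEPLOY_ENV
    as environment variables (both names are hypothetical).
    """
    return {
        "commit": os.environ.get("GIT_SHA", "unknown"),
        "environment": os.environ.get("DEPLOY_ENV", "unknown"),
    }

# Simulate what a deploy to staging might inject:
os.environ.setdefault("GIT_SHA", "abc1234")
os.environ.setdefault("DEPLOY_ENV", "staging")
print(version_info())
```

In practice this kind of data is often served from a `/version` endpoint or stamped into the page footer of non-production environments, so the mapping from bug report to branch is unambiguous.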

Deploy each feature branch to its own environment

Giving every feature branch its own environment clarifies ownership of any issues and lets different teams work independently without creating collisions that slow things down.

Giving each feature branch an environment that developers can use for end-to-end testing also supports shift left practices, which help the team catch new issues before code becomes too complex to easily debug.

Reset and deploy test data with your test environment

Aim to set up test data in your test environment that can shorten your tests, thereby reducing their complexity and maintenance costs.

For example, if you want to confirm the “past orders” page on an e-commerce site is working as intended: instead of adding steps to the test to create backdated orders, automatically add sample orders to your test environment upon deployment.

To avoid broken tests, reset the state of test data before each execution of your test suite.
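The reset-and-seed step can be sketched as follows (a minimal example using an in-memory SQLite database; the `orders` table and its columns are hypothetical stand-ins for your app’s schema, and a real pipeline would run this against the test environment’s database on every deploy):

```python
import sqlite3
from datetime import date, timedelta

def reset_and_seed(conn: sqlite3.Connection) -> None:
    """Drop and recreate the orders table, then insert backdated sample
    orders so a "past orders" test needs no setup steps of its own.

    Using a fixed reference date keeps the seeded data deterministic
    across runs, which is what makes the reset safe to repeat.
    """
    conn.execute("DROP TABLE IF EXISTS orders")
    conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, placed_on TEXT)")
    today = date(2024, 1, 15)  # fixed date -> deterministic seed data
    for days_ago in (30, 14, 7):
        conn.execute(
            "INSERT INTO orders (placed_on) VALUES (?)",
            ((today - timedelta(days=days_ago)).isoformat(),),
        )
    conn.commit()

conn = sqlite3.connect(":memory:")
reset_and_seed(conn)
count = conn.execute("SELECT COUNT(*) FROM orders").fetchone()[0]
print(count)  # three backdated orders, ready for the "past orders" test
```

Because the function drops and recreates everything, running it before each suite execution guarantees every run starts from the same known state.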

Avoid shared state

Running tests concurrently shortens the time it takes to get results. But when tests share test accounts and states, running them concurrently can create conflicts and subsequent false-positive test failures that waste the team’s time and energy.

Wherever possible, seed unique test data and account profiles to each test so all of your tests can run concurrently without colliding.
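One simple way to guarantee that concurrent tests never collide (a sketch — the field names are hypothetical) is to generate a unique account for each test rather than reusing a shared fixture:

```python
import uuid

def make_test_account(prefix: str = "qa") -> dict:
    """Build an isolated account payload for a single test.

    Sketch only: the email/username fields are hypothetical. The point is
    that every test gets its own user, so tests running concurrently can
    never trip over each other's state.
    """
    unique = uuid.uuid4().hex[:12]
    return {
        "email": f"{prefix}-{unique}@example.test",
        "username": f"{prefix}_{unique}",
    }

# Two tests running at the same time get disjoint accounts:
a, b = make_test_account(), make_test_account()
assert a["email"] != b["email"]
```

The same idea applies to any shared resource — carts, documents, API keys: generate it per test, and concurrency stops being a source of false failures.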

5. Test like a human (even with automation)

Testing should represent the customer experience as much as possible. This means using the right kind of automation and recognizing when automation isn’t a good fit.

Use the right kind of automation

Many approaches to test automation evaluate what the computer sees in the DOM (behind-the-scenes browser code), not what your end users will see in the user interface (UI). To validate the actual user experience, use an automation approach that interacts with the UI.

Know when to use test automation and when not to

Automation isn’t a fit for every use case. It’s a great fit for rote, repetitive tests that don’t require subjective evaluation and for features not subject to significant change.

Manual testing is a better fit for new features still in flux, particularly complex use cases, situations that benefit from human judgment, and high-risk releases where manual testing can augment automation.