The Rainforest Method

1. Product builders, not QAs, own product quality

The people who build the product – developers, designers, and product managers – are ultimately responsible for quality outcomes.

Therefore, product builders own quality assurance, which includes test planning and updating the test suite.

How do I put this into practice?

Assign a Quality Architect

While product builders are ultimately responsible for product quality and the test suite, they don’t necessarily know best practices when it comes to testing. 

Assign a Quality Architect who will define the organization’s QA workflows and policies and consult with product builders on testing practices. The Quality Architect should also have the organizational authority to enforce QA policies.

Build test planning into product planning

A lack of forethought and planning inevitably leads to gaps in test coverage.

When defining a new feature or sprint, determine the tests you’ll need to add or update to account for product updates. Treat a feature specification as incomplete until these test coverage requirements have been defined.

Developers kick off automated tests and perform test maintenance

Developers run the CI/CD pipeline, so they’re the ones who should kick off automated tests at the appropriate checkpoints.

Running automated tests as part of the release process reveals which tests have broken as a result of product changes in the release. Since the developers on the team have the most context around the changes (i.e., they’re the most familiar with the exact intended product behavior) and they’re the ones best-situated to unblock the release process, they should also update broken tests to a passing state.

This approach requires that the test automation tooling be highly usable by any developer, ideally without needing to learn a new language or complex framework.  

Treat quality like a team sport

Many roles in the organization – including product builders, support staff, marketers, and more – are impacted by product quality. Stakeholders in quality should be able to access, interpret, and update the test suite, regardless of their technical skills.

2. Quality suffers without systems

Product teams tend to prioritize hitting deadlines over the practices that promote product quality. Amidst the urgency to ship code, testing discipline tends to suffer.

Therefore, to protect product quality, testing practices should be systematized in the product development process using defined workflows, automation, and strong policies.

How do I put this into practice?

Use a CI/CD pipeline

This document describes a number of workflows and policies to apply to your quality assurance (QA) process, but systematizing QA fundamentally begins with using a CI/CD pipeline.

The automated, systematized processes of a CI/CD pipeline provide both speed of execution and checkpoints for test creation, execution, and maintenance. 

Plus, CI/CD is designed for frequent delivery of small releases, which lets product teams uncover and fix issues quickly with minimal effort.

Therefore, create, run, and maintain tests within a CI/CD pipeline.

Use code reviews to verify test coverage

A strong CI/CD pipeline includes checkpoints for code reviews. 

The policies that govern code reviews and the decisions to merge new code should include considerations for test coverage. Specifically, before merging can proceed, require that any new tests – as defined during the planning process – have been implemented and that code on the feature branch passes the updated test suite.

Run end-to-end tests automatically in the release process

As a final measure to prevent issues from hitting production, set your CI/CD pipeline to run automated end-to-end testing as a part of every release. Any test failures should block the corresponding release.
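
The gating logic is simple: if any end-to-end test in the release run fails, the release doesn’t ship. A minimal sketch in Python (the `TestResult` type and function name are illustrative, not part of any particular CI tool):

```python
from dataclasses import dataclass

@dataclass
class TestResult:
    name: str
    passed: bool

def should_block_release(results: list[TestResult]) -> bool:
    """Return True if any end-to-end test failed, blocking the release."""
    return any(not r.passed for r in results)

results = [
    TestResult("checkout_flow", passed=True),
    TestResult("login_flow", passed=False),  # a regression slipped into this release
]

if should_block_release(results):
    print("Release blocked: failing end-to-end tests")
```

In a real pipeline, this check typically takes the form of a required status check: the test job exits nonzero on failure, and the deploy step only runs if the test step succeeded.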

3. Test like a human

Customers are humans, not machines, so testing should represent the human experience as much as possible. This means using the right kind of automation and recognizing when automation isn’t a good fit.

How do I put this into practice?

Know when to use test automation and when not to 

Automation isn’t a fit for every use case. It’s a great fit for rote, repetitive tests that don’t require subjective evaluation. Manual testing is a better fit for new features still in flux, particularly complex use cases, situations that benefit from human judgment, and high-risk releases where manual testing can augment automation.

4. Testing requires speed

When any feature can be copied by competitors, speed of execution is a competitive advantage. 

In a fast-moving CI/CD pipeline, when testing becomes a bottleneck, it’s more likely to get skipped so product teams can keep shipping code. Adopt practices that keep your testing fast so it’s carried out consistently.

How do I put this into practice?

Automate what can be automated

Automation is the first step to speeding up testing, so look for opportunities to automate tests and testing processes within a CI/CD pipeline. 

Only test what’s important

Not all user flows in your application are equally important, and test coverage is expensive to maintain. Beyond a certain amount of coverage, additional testing brings diminishing returns. Prioritize testing the flows you’d immediately fix if they were broken, and don’t create more tests than you can maintain.

Shift left: write and run tests as soon as possible

The sooner you create and run tests to cover new code, the sooner you can catch new issues before code becomes too complex to easily debug. Therefore, create and run your tests as soon as reasonably possible during the software development life cycle. 

To support this goal, every feature branch should have its own environment that developers can use for end-to-end testing.
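
One common way to provide per-branch environments is to derive a predictable environment URL from the branch name, so developers and CI jobs both know where a branch is deployed. A sketch of one such naming scheme (the domain and slug rules are hypothetical):

```python
import re

def preview_env_url(branch: str, base_domain: str = "preview.example.com") -> str:
    """Derive a per-branch test environment URL (naming scheme is illustrative)."""
    # Slugify the branch name so it is safe to use as a subdomain.
    slug = re.sub(r"[^a-z0-9-]+", "-", branch.lower()).strip("-")
    return f"https://{slug}.{base_domain}"

print(preview_env_url("feature/past-orders-page"))
# https://feature-past-orders-page.preview.example.com
```

The CI job that deploys the branch and the end-to-end tests that run against it can then compute the same URL from the same input, with no coordination needed.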

Keep tests short

For each test you create, cover a user flow that is as short and focused as possible. Short tests finish quickly, are easier to debug, and are easier to maintain.
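
In practice, this means one flow per test, and skipping UI steps that another test already covers by jumping straight to the state under test. A toy illustration (the `App` class is a stand-in for a real UI driver):

```python
class App:
    """Toy stand-in for a UI test driver, just to illustrate test scope."""
    def __init__(self):
        self.current_page = "home"
        self.cart_count = 0

    def login(self, email, password):
        self.current_page = "dashboard"

    def login_via_api(self, email):
        # Reach logged-in state directly, skipping UI steps covered by test_login.
        self.current_page = "dashboard"

    def add_to_cart(self, sku):
        self.cart_count += 1

def test_login():
    app = App()
    app.login("user@example.com", "hunter2")
    assert app.current_page == "dashboard"

def test_add_to_cart():
    app = App()
    app.login_via_api("user@example.com")  # setup, not the flow under test
    app.add_to_cart("sku-123")
    assert app.cart_count == 1
```

Because each test covers exactly one flow, a failure points directly at the broken flow instead of forcing you to debug a long chain of steps.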

5. Keep tests up-to-date

Unless you fix broken tests right away and promptly add test coverage for new features, the test suite's usefulness goes down and the chances of the product team abandoning it go up. Test suites are only valuable if they’re systematically kept up-to-date.

How do I put this into practice?

Fix or remove broken tests right away

Broken tests can prompt a vicious cycle: left unaddressed, they erode trust in the test suite, which leads to less investment in the test suite, which leads to more broken tests.

When a broken test is detected, promptly evaluate it to determine whether it should be fixed or removed. Code review policies should require that the test suite be free of broken tests before code can be merged.

Add test coverage for new features before merging 

A lack of test coverage for important features is how critical bugs escape into production.

Per the practices described in this document, planning for new features should also include planning for test coverage. Any tests specified during planning should be created before the corresponding feature is merged, and test coverage should be verified during code review.

6. Success relies on setup

A test is only as good as its test environment and test data. Thoughtful configuration of these is the foundation of an effective testing system.

How do I put this into practice?

Set up test environments for consistency

Inconsistency among test environments adds unnecessary complexity to debugging any issues revealed by testing.

Therefore, use automation both to create and deploy to test environments as part of your CI/CD pipeline. Using automation to standardize test environments makes issues easier to reproduce, debug, and write tests for. 

Further, when you use automation to create and configure all of your environments, it’s easier to see where the production environment differs from other environments.

Make it clear what version of the code is being tested 

Have staging, QA, and production environments that reflect the stages in your development and release process. It should be clear what version of code is running on each environment so when you find a bug, you know who should fix it. 

Also consider deploying each feature you’re working on to its own environment to provide clarity around ownership of any issues. This also allows different teams to work independently without creating collisions that would slow the process down. 

Reset and deploy test data with your test environment

Look for opportunities to set up test data in your test environment that would shorten your tests, thereby reducing their complexity and maintenance costs. 

For example, if you want to confirm the “past orders” page on an e-commerce site is working as intended: instead of adding steps to the test to create backdated orders, automatically add sample orders to your test environment upon deployment. 

To avoid broken tests, reset the state of test data before each execution of your test suite. 
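
The past-orders example above can be sketched as a seed-and-reset pair: a seeding step that runs at deploy time, and a reset step that restores known-good state before each suite run. This is an in-memory illustration, assuming a simple order store rather than any particular database:

```python
import datetime

class TestDataStore:
    """Illustrative in-memory stand-in for the test environment's database."""
    def __init__(self):
        self.orders = []

def seed_sample_orders(store, count=3):
    """Seed backdated sample orders at deploy time so tests needn't create them."""
    today = datetime.date.today()
    for i in range(1, count + 1):
        store.orders.append({
            "id": i,
            "placed_on": today - datetime.timedelta(days=30 * i),  # backdated
        })

def reset_test_data(store):
    """Restore a known-good state before each run of the test suite."""
    store.orders.clear()
    seed_sample_orders(store)

store = TestDataStore()
reset_test_data(store)
assert len(store.orders) == 3  # the "past orders" page now has data to display
```

Because `reset_test_data` clears before seeding, it’s safe to run before every suite execution: earlier runs can’t leave behind state that breaks later ones.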

Avoid test collisions

Running tests concurrently shortens the time it takes to get results. But when tests share test accounts and data, running them concurrently can create conflicts and subsequent false-positive test failures that waste the team’s time and energy.

Where necessary, provide unique test data and account profiles to each test so all of your tests can run concurrently without colliding. 
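
A common way to do this is to generate a unique throwaway account per test at runtime, rather than sharing a fixed test user. A minimal sketch (the email and username formats are arbitrary):

```python
import uuid

def unique_test_account(prefix="qa"):
    """Create a throwaway account identifier unique to one test execution."""
    token = uuid.uuid4().hex[:8]
    return {
        "email": f"{prefix}+{token}@example.com",  # plus-addressing keeps one inbox
        "username": f"{prefix}-{token}",
    }

a = unique_test_account()
b = unique_test_account()
assert a["email"] != b["email"]  # concurrent tests won't share account state
```

With unique accounts, two tests that both modify a cart or a profile can run at the same time without one test’s writes triggering a false failure in the other.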

Use the right test for the job

Different types of tests excel at checking different aspects of the final product, so you can’t rely on just one type of testing for coverage. Design your testing process to apply the right type of test for the task at hand.