The Rainforest Method

At Rainforest, we believe QA doesn't have to suck.

Over a decade of helping startups improve quality, we’ve arrived at a set of principles that work for teams shipping fast and frequently.

We’ve built Rainforest to make it easy for you to put these principles into practice.

1. Product teams own quality – not siloed QA

The people who build the product – developers, product managers, and designers – are the ones ultimately responsible for the quality of the product. To make quality systemic, you need tools that are accessible to everyone on these teams. Product builders can then own QA, including planning, creating, and maintaining tests.

How do I put this into practice?

Build test planning into product planning

A lack of forethought and planning inevitably leads to gaps in test coverage.

When defining a new feature or sprint, determine the tests you’ll need to add or update to account for product updates. Don’t consider a feature specification complete until test coverage requirements have been defined.

Developers execute and maintain automated tests

Product managers and developers can collaborate on test creation, but developers should be the ones who primarily trigger and maintain automated tests.

Developers run deployments, so they’re the ones who should trigger testing at the appropriate quality gates.

Developers also have the most context around the changes being shipped and are the best-situated to unblock the release process. So, when automated tests fail, developers should fix the underlying issue(s), just as they do with unit tests. 

This approach requires the test automation tooling to be highly usable by any developer, ideally without needing to learn a new language or framework.  

Make quality a team effort

Many roles in the organization – including product builders, founders, support staff, marketers, and more – are dependent on product quality. They should all be able to access, interpret, and update the test suite, regardless of their technical skills.

2. Effective testing requires CI/CD

Software development teams tend to prioritize hitting deadlines over practices that promote product quality. Amidst the urgency to ship code, testing discipline tends to suffer. The automated, systematized workflows of a CI/CD pipeline provide both speed of execution and checkpoints for test creation, execution, and maintenance.

Plus, CI/CD is designed for frequent delivery of small releases that allow teams to quickly uncover and fix issues with the least amount of work.

Therefore, create, run, and maintain tests within a CI/CD pipeline.

How do I put this into practice?

Add test coverage for new features before merging

A lack of test coverage for important features is how critical bugs escape into production.

Any tests specified during planning should be created before merging the corresponding feature.

Use code reviews to verify test coverage

A strong CI/CD pipeline includes checkpoints for code reviews.

The policies that govern code reviews and the decisions to merge new code should include considerations for test coverage. Specifically, before merging can proceed, require that any new tests – as defined during the planning process – have been implemented and that code on the feature branch passes the updated test suite.

Run end-to-end tests automatically in the release process

Ideally, run your test suite within CI at quality gates triggered by a pull or merge request. 

At a minimum, to prevent issues from hitting production, set your CI/CD pipeline to run automated end-to-end testing as a part of every release. Any test failures should block the corresponding release.
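As a minimal sketch of such a gate: the step below runs the end-to-end suite and blocks the pipeline on any failure. The pytest invocation is a placeholder; substitute whatever command runs your suite.

```python
# A minimal CI quality gate: run the end-to-end suite and block the
# release if any test fails. The command is a placeholder for your runner.
import subprocess
import sys

def release_is_blocked(command):
    """Run the end-to-end suite; any failure (non-zero exit) blocks release."""
    result = subprocess.run(command, capture_output=True)
    return result.returncode != 0

# In the release job, exit non-zero so the pipeline stops the deploy:
# if release_is_blocked(["pytest", "e2e/"]):
#     sys.exit("end-to-end tests failed; release blocked")
```

Most CI systems treat a non-zero exit code from any step as a failed job, which is what makes this a hard gate rather than a report.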

Enforce QA policies with a Quality Ops owner

The same person who enforces policies and drives best practices in your development and deployment workflows should do the same for your QA workflows.

3. For test coverage, less is more

Test suites that take too long to run or are too difficult to maintain can become bottlenecks in the release process. When testing is a bottleneck, product teams are more likely to skip it so they can keep shipping code – which puts quality at risk.

Therefore, be judicious about adding coverage and continually prune low-value tests from your suite to keep it fast and maintainable.

How do I put this into practice?

Apply the Snowplow Strategy

After a blizzard, snowplows clear the most-trafficked streets first because those streets affect the most people. Later, the plows might clear some side streets while ignoring others altogether.

Applying the Snowplow Strategy to your testing means focusing on the user flows that are most important to your users and to your business. Ignore the flows you wouldn’t immediately fix if they broke.

Good testing is like a good product roadmap: the ‘nice-to-haves’ go in the backlog.

Only create what you can maintain

It’s tempting to test as many things as possible in the name of quality assurance. But if you’re already struggling to keep up with test maintenance, seriously reconsider adding a new test to your suite until you’ve retired an old one.

Keep tests short

For each test you create, cover a user flow that is as narrowly scoped as possible. Short tests finish quickly, are easier to debug, and are easier to maintain.

Fix or remove broken tests right away 

When left unaddressed, broken tests can prompt a vicious cycle: they erode trust in the test suite, which leads to less investment in the test suite, which leads to more broken tests.

When a broken test is detected, promptly evaluate it to determine whether it should be fixed or removed. Code review policies should require that the test suite be free of broken tests before code can be merged.

4. A test is only as useful as its environment

The test environment is the most overlooked aspect of good testing. Thoughtfully configuring test environments and seeding state are critical to creating a system designed for quality.

How do I put this into practice?

Set up test environments for consistency

Inconsistency among test environments adds unnecessary complexity to debugging any issues revealed by testing.

Therefore, use automation both to create test environments and to deploy to them as part of your CI/CD pipeline. Standardizing test environments with automation makes issues easier to reproduce, debug, and write tests for.

Make it clear what version of the code is being tested

Have staging, QA, and production environments that reflect the stages in your development and release process. It should be clear what version of the code is running in each environment, so that when you find a bug, you know who should fix it.
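One lightweight way to make the running version visible is to stamp each deployment with its commit and expose it, for example via a version endpoint. A sketch, assuming your CI injects a `BUILD_SHA` environment variable at deploy time (the variable name is illustrative):

```python
# Sketch: stamp each deployment with the commit it was built from so
# testers can tell which code version an environment is running.
# BUILD_SHA is assumed to be injected by CI at deploy time.
import os

def version_info(environment):
    """Payload for a hypothetical /version endpoint."""
    return {
        "environment": environment,  # e.g. "staging", "qa", "production"
        "commit": os.environ.get("BUILD_SHA", "unknown"),
    }
```

When a tester hits a bug, the commit in this payload tells you exactly which change set, and therefore which team, to route it to.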

Deploy each feature branch to its own environment

Providing every feature branch with its own environment makes ownership of any issues clear and allows different teams to work independently, without collisions that slow things down.

Giving each feature branch an environment that developers can use for end-to-end testing also supports shift-left practices, which help the team catch new issues before code becomes too complex to easily debug.
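Deriving the environment's address from the branch name keeps the branch-to-environment mapping predictable. A small sketch, using a hypothetical preview domain:

```python
# Sketch: map a feature branch to a predictable, URL-safe environment name.
# The preview domain is a stand-in for whatever your infrastructure uses.
import re

def env_name(branch):
    """Turn a branch name into a deterministic per-branch environment URL."""
    slug = re.sub(r"[^a-z0-9]+", "-", branch.lower()).strip("-")
    return f"{slug}.preview.example.com"
```

Because the mapping is deterministic, developers, reviewers, and test tooling can all find a branch's environment without any extra bookkeeping.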

Reset and deploy test data with your test environment

Aim to set up test data in your test environment that can shorten your tests, thereby reducing their complexity and maintenance costs.

For example, if you want to confirm the “past orders” page on an e-commerce site is working as intended: instead of adding steps to the test to create backdated orders, automatically add sample orders to your test environment upon deployment.

To avoid broken tests, reset the state of test data before each execution of your test suite.
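The deploy-time reset and seeding described above might look like the following sketch, where `db` stands in for whatever interface your test environment exposes for writing data, and the sample orders are illustrative:

```python
# Sketch: reset test data and seed backdated sample orders at deploy time,
# so tests can start from a known "past orders" page instead of creating
# orders themselves. `db` is a hypothetical data-access object.
import datetime

def reset_and_seed(db):
    """Clear stale state, then insert deterministic sample orders."""
    db.clear()  # reset before every suite run to avoid stale state
    today = datetime.date.today()
    for days_ago in (30, 7, 1):
        db.insert({
            "order_id": f"sample-{days_ago}d",
            "placed_on": (today - datetime.timedelta(days=days_ago)).isoformat(),
        })
```

Hooking this into the deployment step means every suite run starts from the same known state, and the "past orders" test shrinks to a single page check.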

Avoid shared state

Running tests concurrently shortens the time it takes to get results. But when tests share accounts and state, running them concurrently can create conflicts and spurious test failures that waste the team's time and energy.

Wherever possible, seed unique test data and account profiles to each test so all of your tests can run concurrently without colliding.
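Per-test seeding can be as simple as generating a unique account for each test, so concurrent runs cannot collide. A sketch, with illustrative field names:

```python
# Sketch: give each test its own account so concurrent tests never
# share state. The fields are stand-ins for your real account setup.
import uuid

def unique_account():
    """Create an isolated account/profile for a single test."""
    tag = uuid.uuid4().hex[:8]
    return {
        "email": f"test+{tag}@example.com",
        "username": f"test-user-{tag}",
    }
```

Seeding an account like this in each test's setup step removes the shared-state conflicts entirely, at the cost of slightly more setup per test.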

5. Test like a human (even with automation)

Testing should represent the customer experience as much as possible. This means using the right kind of automation and recognizing when automation isn’t a good fit.

How do I put this into practice?

Use the right kind of automation

Many approaches to test automation evaluate what the computer sees in the DOM, not what your end users will see in the user interface (UI). To validate the actual user experience, use an automation approach that interacts with the UI.

Know when to use test automation and when not to

Automation isn’t a fit for every use case. It’s a great fit for rote, repetitive tests that don’t require subjective evaluation and for features not subject to significant change.

Manual testing is a better fit for new features still in flux, particularly complex use cases, situations that benefit from human judgment, and high-risk releases where manual testing can augment automation.