Over a decade of helping startups improve quality, we’ve arrived at a set of QA testing best practices that work for teams shipping fast and frequently. They fit neatly into five organizing principles:

  1. Product teams own software quality — not a siloed QA team
  2. Effective testing requires CI/CD
  3. For test coverage, less is more
  4. A test is only as useful as its environment
  5. Always test like a human (even with automation)

Together, these principles represent The Rainforest Method. We’ve built our QA solution to make it easy for you to put this software testing method into practice and develop a quality product.

A few notes:

  • This post covers functional testing — the kind that makes sure your app’s important flows behave as expected for your users. We won’t be covering non-functional types of testing like security testing, performance testing, or usability testing.
  • Throughout the post, assume we’re referring to black box testing (where the tester doesn’t have access to the underlying code), not white box testing.
  • Also assume that “testing” refers specifically to automated testing, unless otherwise noted.

1. Product teams own software quality — not a siloed QA team

The people who build the product — particularly product managers and developers — are incentivized to ship and have the context to make informed tradeoffs between speed and quality.

Siloed QA team members, on the other hand, tend to slow down the release process because they’re incentivized to catch as many bugs as possible, whether the bugs are meaningful for quality control or not.

Therefore, software product builders should hold ultimate responsibility for quality.

This isn’t to say you shouldn’t use quality assurance roles as part of your testing team — just that you should apply those roles to the right testing activities. Working in close partnership with product builders (starting as soon as software requirements are being scoped out), they can level up any QA process and contribute to high-quality software.

QA testers and QA engineers, depending on their specialties, can consult on your software testing process, help define test cases, conduct exploratory testing to supplement your regression testing, review automated test failures (not all test failures are bugs — many simply reflect tests that need to be updated), and maintain automated tests.

As long as your product builders are responsible for defining what functionality gets tested, for executing tests at the appropriate times in your devops pipeline, and for prioritizing and fixing bugs, you’re on the right track.

Build test planning into product planning

A lack of forethought and planning inevitably leads to gaps in test coverage.

When defining a new feature or sprint, determine the tests you’ll need to add or update to account for product updates. Don’t consider a feature specification complete until you’ve clearly defined the test cases you need to add to your suite.

Adopt accessible testing tools

Being responsible for quality assurance implies defining test coverage (i.e., identifying the important functionality, features, and flows that should be tested) and then confirming the coverage has been implemented correctly, regardless of who implements it.

To make this possible for everyone on the product team, you need software testing tools that anyone can use — tools that can be adopted quickly and easily without any special skills or training.

When you have the right testing tools, anyone else in the organization who depends on product quality — including founders, support staff, marketers, and more — can access, interpret, and update the test suite, regardless of their technical skills. The QA process becomes more of a team sport.

Developers execute automated tests

Product managers and developers can collaborate on defining test coverage and acceptance criteria for your tests, but your software engineers and developers should be the primary ones kicking off automated tests. They run deployments, so they’re the ones who should trigger testing at the appropriate quality gates.

They also have the most context around the changes being shipped and are best situated to unblock the release process. So, when automated tests fail, developers have the context necessary to get an outdated test to a passing state (in the case of a false positive test result) or to fix the underlying functionality in your app, just as they do when unit tests or integration tests fail. 

Of course, many test automation solutions are notoriously painful to use. Ideally, your test automation tooling is highly usable by any developer or engineer, without their needing to learn a new language or framework. Otherwise, test maintenance can become a drag for your developers and for your release process.

In fact, some software product teams use QA engineers — working closely with developers — to review test failures and update tests so their developers can stay focused on shipping code and fixing legitimate bugs.

Using generative AI, Rainforest can automatically update automated tests to reflect intended changes to your app.

Which means your team spends less time on test maintenance and more time shipping code.

2. Effective testing requires CI/CD

Software development teams tend to prioritize hitting deadlines over testing practices that promote product quality. Amid the urgency to ship code, testing discipline tends to suffer.

The automated, systematized workflows of a continuous integration / continuous deployment (CI/CD) pipeline provide both velocity and checkpoints for test creation, execution, and maintenance.

Plus, CI/CD is designed for software development lifecycles (SDLCs) featuring frequent delivery of small releases, allowing teams to quickly uncover and fix issues with the least amount of work.

Therefore, the best testing process involves creating, running, and maintaining tests within a CI/CD pipeline.

Add test coverage for new features before merging

A lack of test coverage for important functionality is how critical bugs escape into production.

Any tests specified during planning should be created before merging the corresponding feature.

Use code reviews to verify test coverage

A strong CI/CD pipeline includes checkpoints for code reviews.

The policies that govern code reviews and the decisions to merge new code should include considerations for test coverage. Specifically, before merging can proceed, require that any new tests — as defined during the planning process — have been implemented and that code on the feature branch passes the updated test suite.
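
For illustration, here’s a minimal sketch of what such a checkpoint might look like as a CI script. It assumes a Node-based pipeline, app code under src/, and a convention that test files contain ".test." or ".spec." in their names (all three are assumptions, not requirements):

    // require-tests.ts: a rough heuristic that fails the build when a branch
    // changes application code without adding or updating any tests.
    import { execSync } from "node:child_process";

    const changedFiles = execSync("git diff --name-only origin/main...HEAD")
      .toString()
      .trim()
      .split("\n");

    const touchesAppCode = changedFiles.some((f) => f.startsWith("src/"));
    const touchesTests = changedFiles.some((f) => /\.(test|spec)\./.test(f));

    if (touchesAppCode && !touchesTests) {
      console.error("App code changed, but no tests were added or updated.");
      process.exit(1); // a non-zero exit fails the job and blocks the merge
    }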

Run end-to-end tests automatically in the release process

In the ideal continuous testing strategy, you run your test suite within CI at quality gates triggered by a pull or merge request. 

At a minimum, to prevent issues from hitting production, set your CI/CD pipeline to run automated end-to-end testing as a part of every release. Any test failures should block the corresponding release.
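
In practice, the gate can be as simple as a CI step that runs the suite and fails the job on any failure. Here’s a minimal sketch, assuming a Node pipeline, a Playwright-style suite, and a hypothetical STAGING_URL variable set by the pipeline:

    // run-e2e-gate.ts: run the end-to-end suite against the release candidate.
    import { execSync } from "node:child_process";

    try {
      execSync("npx playwright test", {
        stdio: "inherit",
        env: { ...process.env, BASE_URL: process.env.STAGING_URL },
      });
    } catch {
      // execSync throws when the suite exits non-zero; exiting non-zero here
      // fails the pipeline step, which blocks the release.
      console.error("End-to-end tests failed. Blocking this release.");
      process.exit(1);
    }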

Enforce QA policies with a Quality Ops owner

The same stakeholder who enforces policies and drives best practices in your software development process and deployment workflows should do the same for your software QA workflows.

3. For test coverage, less is more

Automated test suites that take too long to run or are too difficult to maintain can become bottlenecks in the release process. When testing is a bottleneck, product teams are more likely to skip it so they can keep shipping code — which puts quality at risk.

Therefore, be judicious about adding coverage and continually prune low-value tests from your suite to keep it fast and maintainable.

Apply the Snowplow Strategy

After a blizzard, snowplows clear the most-trafficked streets first because those streets affect the most people. Later, the plows might clear some side streets while ignoring other streets altogether.

Applying the Snowplow Strategy to your software testing means focusing on the user flows that are most important to your end users and to your business. Ignore the flows you wouldn’t immediately fix if they broke. (It’s among the top QA best practices.)

This can also apply to compatibility testing: don’t bother testing on browsers or platforms that your users don’t actually use.
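
In a framework like Playwright, that can be as simple as only declaring test projects for the platforms your analytics show real traffic from. A sketch (the browser mix below is made up):

    // playwright.config.ts: limit compatibility coverage to the browsers and
    // devices your usage data says matter. This particular mix is illustrative.
    import { defineConfig, devices } from "@playwright/test";

    export default defineConfig({
      projects: [
        { name: "desktop-chrome", use: { ...devices["Desktop Chrome"] } },
        { name: "mobile-safari", use: { ...devices["iPhone 14"] } },
        // No Firefox or Edge projects here, assuming analytics show
        // negligible usage on those browsers.
      ],
    });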

Good testing is like a good product roadmap: the ‘nice-to-haves’ go in the backlog.

Only create what you can maintain

It’s tempting to test as many things as possible in the name of quality assurance. But not all user flows in your application are equally important, and test coverage is expensive in terms of maintenance costs.

So if you’re already struggling to keep up with test maintenance, don’t add a new test to your suite until you’ve retired an old one.

Keep tests short

For each test case, cover a user flow that’s as small and self-contained as possible. Short tests finish quickly, are easier to debug, and are easier to maintain.
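
For example, a short test covers a single flow from start to finish and nothing more. A Playwright-style sketch (the URL and field labels are hypothetical):

    import { test, expect } from "@playwright/test";

    // One small, self-contained flow: log in and confirm the dashboard loads.
    test("user can log in", async ({ page }) => {
      await page.goto("https://app.example.com/login");
      await page.getByLabel("Email").fill("tester@example.com");
      await page.getByLabel("Password").fill("a-throwaway-password");
      await page.getByRole("button", { name: "Log in" }).click();
      await expect(page.getByRole("heading", { name: "Dashboard" })).toBeVisible();
    });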

For modern, agile teams, speed of test execution can be a useful metric to track. If you find your test suite is taking longer to run than usual, it might be time to look for ways to make your tests shorter and more efficient.

Fix or remove broken tests right away 

When left unaddressed, broken tests can prompt a vicious cycle: they erode not only your quality standards but also the team’s trust in the test suite, which leads the team to invest less in the suite, which leads to more broken tests. Ultimately, the test suite’s reliability suffers, and so does quality.

When a broken test is detected, promptly evaluate it using the guidelines in this section to determine whether it should be fixed or removed. Code review policies should require that the test suite be free of broken tests before code can be merged.

4. A test is only as useful as its environment

The test environment is the most overlooked aspect of good testing. Thoughtfully configuring test environments and seeding their state are critical to creating a system designed for high quality.

Set up test environments for consistency

Inconsistency among test environments adds unnecessary complexity to debugging any issues revealed by testing.

Therefore, use automation both to create test environments and to deploy to them as part of your CI/CD pipeline. Using automation to standardize test environments makes issues easier to reproduce, debug, and write tests for.

Make it clear what version of the code is being tested

Have staging, QA, and production environments that reflect the stages in your development and release process. It should be clear what version of the code is running in each environment, so when someone files a bug report in Jira (or whatever ticket management tool you use), you know which changes introduced the bug and who should fix it.
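
One lightweight way to make the running version visible is to expose it from the app itself. A sketch, assuming an Express server and hypothetical GIT_SHA and APP_ENV variables injected at deploy time:

    import express from "express";

    const app = express();

    // Report exactly which commit is running in this environment, so bug
    // reports can name the build they were filed against.
    app.get("/version", (_req, res) => {
      res.json({
        sha: process.env.GIT_SHA ?? "unknown",
        environment: process.env.APP_ENV ?? "unknown",
      });
    });

    app.listen(3000);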

Deploy each feature branch to its own environment

Deploying every feature branch to its own environment clarifies ownership of any issues and lets different teams work independently, without collisions that slow things down.

Giving each feature branch an environment that developers can use for end-to-end testing also supports shift-left practices, which help the development team catch new issues before code becomes too complex to easily debug.
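
If your pipeline deploys each branch to a predictable preview URL, pointing the test suite at the right environment can be a one-liner. A sketch (the URL scheme and GIT_BRANCH variable are hypothetical):

    // Derive the test target from the branch name, assuming the pipeline
    // deploys each branch to <branch-slug>.preview.example.com.
    const branch = process.env.GIT_BRANCH ?? "main";
    const slug = branch.toLowerCase().replace(/[^a-z0-9]+/g, "-");
    export const baseURL = `https://${slug}.preview.example.com`;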

Reset and deploy test data with your test environment

To measurably reduce the complexity and maintenance costs of your tests, seed your test environment with data that shortens the tests themselves.

For example, say you want to confirm that the “past orders” page on an e-commerce site works as intended. Instead of adding steps to the test to create backdated orders, automatically add sample orders to your test environment upon deployment.

To avoid broken tests, reset the state of test data before each execution of your test suite.
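
Putting both ideas together, a suite-level setup step can reset state and seed the sample orders before any test runs. A sketch, assuming Node 18+ (for the built-in fetch), a Playwright-style global setup, and a hypothetical test-only seeding API on the staging environment:

    // global-setup.ts: runs once before the whole suite.
    export default async function globalSetup() {
      const base = "https://staging.example.com/test-api"; // hypothetical

      // Reset test data first, so leftovers from a previous run can't break
      // this one.
      await fetch(`${base}/reset`, { method: "POST" });

      // Seed the backdated orders the "past orders" test expects, so the test
      // itself needs no extra steps to create them.
      await fetch(`${base}/orders`, {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify([
          { id: "order-1", placedAt: "2024-01-15", total: 42.5 },
          { id: "order-2", placedAt: "2024-03-02", total: 19.99 },
        ]),
      });
    }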

Avoid shared state

Running tests concurrently shortens the time it takes to get results. But when tests share test accounts and state, running them concurrently can create conflicts that lead to false-positive test failures and incorrect validations, wasting the team’s time and energy. (Few things are more disruptive to the software development lifecycle than engineers having to investigate false positives.)

Wherever possible, seed unique test data and account profiles to each test so all of your tests can run concurrently without colliding.
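
One way to do that is a small helper that mints a fresh account for every test. A sketch, again assuming a built-in fetch and a hypothetical test-only signup endpoint:

    import { randomUUID } from "node:crypto";

    // Give every test its own throwaway account so concurrent runs can't
    // collide on shared state.
    export async function seedUniqueAccount(): Promise<string> {
      const email = `qa+${randomUUID()}@example.com`;
      await fetch("https://staging.example.com/test-api/accounts", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ email, password: "throwaway-password" }),
      });
      return email;
    }

Each test calls the helper in its setup and logs in with the returned email, so no two tests ever touch the same account.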

5. Always test like a human (even with automation)

Testing should represent the customer experience as much as possible. This means using the right kind of automation and recognizing when automation isn’t a good fit.

Use the right kind of automation

Many approaches to test automation evaluate what the computer sees in the DOM (the page’s underlying structure), not what your end users will see in the user interface (UI). To validate the actual user experience, use an automation approach that interacts with the UI.
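
The difference shows up in how a test locates and checks elements. A Playwright-style sketch (the page and selectors are hypothetical):

    import { test, expect } from "@playwright/test";

    test("checkout works the way a user sees it", async ({ page }) => {
      await page.goto("https://app.example.com/cart");

      // DOM-centric check: only proves the node exists in the markup. It can
      // pass even when the button is invisible or covered and unusable.
      expect(await page.locator("#checkout").count()).toBe(1);

      // User-centric check: clicks the visible "Checkout" button the way a
      // person would. Playwright fails this step if a user couldn't click it.
      await page.getByRole("button", { name: "Checkout" }).click();
    });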

Know when to use test automation and when not to

Automation isn’t a fit for every use case and testing phase. It’s a great fit for rote, repetitive tests that don’t require subjective evaluation and for features not subject to significant change.

Manual testing is a better fit for new features still in flux, particularly complex use cases, situations that benefit from human judgment, and high-risk releases where manual testing can augment automation.