Whether you’re formulating your startup’s first QA strategy or you’ve realized your existing strategy is undermining your otherwise-agile methodologies, this post is for you.

In this piece, we focus on the practical frameworks and practices that’ll help you lay a strong foundation for quality assurance in your org. Among other things, you’ll learn:

  • An alternative to hiring a QA team
  • Where to focus your efforts for the best returns
  • When to use test automation vs. manual testing

Before we get into it: any strategy needs support to succeed. This guide assumes you have buy-in from key stakeholders – your software and management teams – that product quality is key to protecting the interests of your business, and that quality can’t improve without better QA processes.

Nail down your (initial) QA goal

To figure out what success looks like for your quality assurance initiative, it’s often useful to start by asking: why, specifically, did we decide to develop a quality assurance strategy? 

Usually there’s some sort of precipitating event, like an embarrassing bug showing up during a sales demo, or a bunch of customers reporting an issue affecting an important workflow in your app. 

Instead of giving your QA strategy a generic goal (“improve product quality” or “make our product high-quality”) that’s difficult to quantify, narrow it down to something that ties directly to the pain your team is feeling at this moment. Select a goal that’s objectively measurable so you can track the effectiveness of your efforts. In many cases, this approach aligns with a goal like: Within [time period], eliminate high-priority bugs reported by customers.  

Why “high-priority bugs” and not just “bugs”? Because bugs are an inevitable part of building software, and not all bugs are equally important. The effort required to try to eliminate all bugs isn’t compatible with shipping fast or frequently, so modern software teams have to prioritize.

Regardless, don’t stress about defining the perfect initial goal: just like any other goal, your QA goal can change as you learn and become more effective.

Use a CI/CD pipeline

Using a CI/CD pipeline is one of the best DevOps practices you can adopt to improve and maintain software quality.

  • It systematizes good quality assurance practices by automating quality checks at defined checkpoints in the software development process.
  • It catches issues early in the development lifecycle, when they’re the least expensive to resolve.
  • It’s designed to deliver code frequently and quickly – smaller batches of code are easier to debug than larger, more complex ones.
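
To make those checkpoints concrete, here’s a minimal sketch of a gating script you might run in a pipeline. The npm script names are placeholders for whatever commands your project actually defines:

```typescript
// A minimal sketch of pipeline checkpoints as a gating script.
// The npm script names below are placeholders; substitute whatever
// commands your project actually uses.
import { execSync } from "node:child_process";

const checkpoints = [
  { name: "Lint / static analysis", command: "npm run lint" },
  { name: "Unit tests", command: "npm run test:unit" },
  { name: "End-to-end tests", command: "npm run test:e2e" },
];

for (const { name, command } of checkpoints) {
  console.log(`Running checkpoint: ${name}`);
  try {
    execSync(command, { stdio: "inherit" });
  } catch {
    // A failed checkpoint stops everything downstream of it,
    // so issues get caught (and fixed) as early as possible.
    console.error(`${name} failed. Blocking the pipeline.`);
    process.exit(1);
  }
}

console.log("All checkpoints passed. Safe to deploy.");
```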

Assign (don’t hire) QA roles

There are four major, ongoing tasks in the software testing workflow:

  • Writing test cases
  • Running tests
  • Evaluating test results
  • Updating (or “maintaining”) tests

It used to be the case that technical skills would dictate who got assigned these tasks. For example, only software engineers familiar with frameworks like Selenium or Cypress who could “speak” relevant programming languages could write or maintain automated test cases. 

These testing tools forced companies to (a) hire specialized QA engineers to handle their automated test suites, and/or (b) have their existing software developers handle these testing duties, which meant less developer time dedicated to shipping code. It also meant other roles like product managers couldn’t use the tools that helped protect product quality – even though they ultimately shared the accountability for quality. 

With no-code test automation, anyone on the software team – including developers, product managers, designers, and project managers – can handle any of the responsibilities in the software testing workflow.

Rainforest is the only test automation platform that takes a no-code approach throughout the entire testing workflow. (Pictured: creating a test in Rainforest’s Visual Editor.)

For this and other reasons, you don’t need to hire QA specialists to implement a QA strategy. In fact, we think most startups should avoid hiring QA roles to handle testing activities – it’s an unnecessary expense and can lead to perverse incentives that reduce quality and slow down releases.

Delegating to outsourced testing teams simply compounds these issues – especially the slowdowns – because outsourced QA testers will never know the ins and outs of your product the way your team does.

Instead, you can assign testing workflow responsibilities to roles you likely already have on staff. Consider who’s in the best position to own each task. For example, here’s how we handle things at Rainforest, where we haven’t hired any dedicated quality assurance testers:

  • Writing test cases: During feature planning, the product squad – a PM, software developer(s), and designer(s) – defines the test plan, including the end-to-end test coverage needed for the new feature they’re working on. For smaller features, the PM or a developer writes the tests. For larger features, everyone pitches in. 
  • Running tests: In our CI/CD pipeline, the regression test suite kicks off automatically as part of the release process, so the development team owns test execution. (Any team member can also run tests ad hoc from the Rainforest UI as a sanity check.) We also run scheduled tests against prod every day in case one of the third-party services or APIs we use fails.
  • Evaluating test results and maintaining tests: Developers have the most context around exactly what’s being shipped, which means they’re in the best position to debug test failures and to unblock releases by bringing tests into a passing state.

Know when to use automation, and when not to

When it comes to manual vs. automated testing, you should use test automation as much as possible, because it’s faster and cheaper. But that’s not to say you can use automation for everything in your testing process – manual testing is still a better option for some scenarios.

Here’s a rule of thumb for your testing strategy: use automation for repeated testing of stable features. Regression testing – which is run upon every release to confirm new changes haven’t broken existing functionality – is the perfect use case for automation.   

On the other hand, if you’re developing a new feature that’s still evolving, it typically makes more sense to spend your time testing manually than to create automated tests you’d have to update to reflect every feature change. (Pro tip: don’t underestimate the costs of test maintenance.) 

For obvious reasons, manual testing is also a better fit for scenarios that require human judgment and ingenuity, like exploratory testing.

Configure your QA environment 

A test is only as good as its environment. Underinvesting in your QA environment can erode all the speed and quality benefits you expected automated testing to deliver.

In short, your software testing environment needs to be production-like (“prod-like”): you want the confidence that new code will work as expected in the environment where customers will ultimately interact with it, and you only get that confidence if your test environment performs like production.

Configuring your QA environment involves two main components: setting up the technology (including hardware, software, and other infrastructure like networking) and seeding test data. 

The QA environment

Creating a QA testing environment that closely resembles prod is easier said than done. (Forget creating a perfect duplicate of prod in a testing environment – that’s rarely practical, or even possible.) There’s always going to be a tradeoff between the fidelity of the environment’s resemblance to prod and the cost of creating that resemblance. You’ll have to figure out the right balance of fidelity vs. cost for your team.

The most frequent issue we see is that teams underinvest in their testing environments to the point where automated tests consistently fail – and require time-consuming investigation – because the environments are so slow and unstable. 

For example, imagine an automated test is looking for a particular element in your app’s UI, but the element doesn’t appear within the expected timeframe because the app loads too slowly. That’s an avoidable test failure if you invest enough to get your QA environment to a “good enough” state of performance and stability.

Test data seeding

The goal of seeding prod-like test data in your QA environment’s database is to create states in which tests complete as fast as possible and with the fewest possible steps. (Among other benefits we’ll address in the next section, having fewer steps means fewer potential points of false-positive test failures.)

For example, if you run a SaaS app like we do, there are lots of things you’ll want to test that require being logged into a (test) user account. You don’t want to include steps in every one of your tests for creating and/or logging into an account – each of those tests will take longer to run, and any issue with the account-creation flow in your app will break all your tests. It’s more efficient to write (shorter) test cases against environment states in which realistic fake-user accounts have already been created and logged in.
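
As a rough sketch, seeding a pre-verified account might look something like this. The table schema, column names, and QA_DATABASE_URL variable are all hypothetical stand-ins for your own setup:

```typescript
// seed-test-users.ts: a sketch of seeding ready-to-test accounts.
// The schema and QA_DATABASE_URL here are hypothetical.
import { Client } from "pg";

async function seedTestUsers(): Promise<void> {
  const client = new Client({ connectionString: process.env.QA_DATABASE_URL });
  await client.connect();
  try {
    // Insert an account in a "ready to test" state: email already
    // verified, so tests can skip the signup flow entirely.
    await client.query(
      `INSERT INTO users (email, password_hash, email_verified)
       VALUES ($1, $2, true)
       ON CONFLICT (email) DO NOTHING`,
      ["qa-user@example.com", "<precomputed-test-hash>"]
    );
  } finally {
    await client.end();
  }
}

seedTestUsers().catch((err) => {
  console.error(err);
  process.exit(1);
});
```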

When determining what test data to seed, consider the core roles and functions within your application. 

  • Roles: Different roles (e.g., user, admin, superadmin) have different core workflows, so you’ll want to account for each of those flows.
  • Functions: Do you have an e-commerce app? Then you’ll want to create states that make it easy to test things like adding items to a cart and checking out. 

Reset your QA environment’s database to a “clean” state each time before you run your test suite. This task is ideal for automation in your CI/CD process, but if you don’t use CI/CD, you can use a webhook to initiate the reset.
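
If you go the webhook route, the trigger can be as simple as this sketch. The endpoint and token are hypothetical stand-ins for whatever reset mechanism your environment exposes:

```typescript
// A sketch of triggering a QA database reset via webhook before a
// test run. The URL and token are hypothetical; point them at your
// environment's actual reset mechanism.
const response = await fetch("https://qa.example.com/hooks/reset-database", {
  method: "POST",
  headers: { Authorization: `Bearer ${process.env.QA_RESET_TOKEN}` },
});

if (!response.ok) {
  // Never run tests against a dirty environment; fail loudly instead.
  throw new Error(`Database reset failed with status ${response.status}`);
}
console.log("QA database reset to a clean, seeded state.");
```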

Don’t test too much(!)

There are costs to doing too much, too quickly. In fact, you can save yourself a lot of grief by remembering that in many areas of software testing, less is more.

Types of testing

If you’re new to software quality assurance practices, looking at a list of types of QA tests can feel overwhelming. 

The good news is: you don’t have to worry about all these types of tests during your initial foray into QA – your test strategy can include just three types of tests to get you to sufficient test coverage and capture the vast majority of important bugs: unit tests, end-to-end (E2E) tests, and exploratory tests.  

Unit tests

An automated unit test evaluates a specific, narrow function in the code. Unit tests are incredibly fast and cheap to run, so there’s mostly upside to running them early and often in the development process. They’re very useful for catching issues soon after they’re introduced, when they’re the least expensive to fix.
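
For illustration, a unit test can be this small (Jest-style syntax; applyDiscount is a made-up function standing in for any narrow unit of your code):

```typescript
// applyDiscount is a hypothetical function under test.
function applyDiscount(price: number, percent: number): number {
  return Math.round(price * (1 - percent / 100) * 100) / 100;
}

// Each test exercises one narrow behavior of one function,
// which is why unit tests run in milliseconds.
test("applies a percentage discount", () => {
  expect(applyDiscount(100, 20)).toBe(80);
});

test("a 0% discount leaves the price unchanged", () => {
  expect(applyDiscount(59.99, 0)).toBe(59.99);
});
```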

End-to-end tests

E2E tests evaluate end-user workflows from one end to the other. “Workflows” are finite tasks like logging into the app, adding items to the shopping cart, and checking out with a credit card.
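
Sketched in Cypress (one of the frameworks mentioned earlier), that workflow might look like the following. Every route and selector here is hypothetical:

```typescript
// checkout.cy.ts: a sketch of the workflow above.
// Every route and selector is hypothetical; adapt them to your app.
describe("checkout", () => {
  it("logs in, adds an item to the cart, and pays by card", () => {
    cy.visit("/login");
    cy.get("[data-test=email]").type("qa-user@example.com");
    cy.get("[data-test=password]").type("correct-horse-battery");
    cy.get("[data-test=submit]").click();

    cy.visit("/products/espresso-machine");
    cy.get("[data-test=add-to-cart]").click();

    cy.visit("/checkout");
    cy.get("[data-test=card-number]").type("4242424242424242");
    cy.get("[data-test=pay]").click();

    // The test passes only if every step worked, end to end.
    cy.contains("Order confirmed");
  });
});
```

Per the test data seeding advice above, you could shorten this test further by starting it from a seeded, already-logged-in state.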

While automated E2E tests are fast and inexpensive relative to manual testing, they’re not as fast and cheap as unit tests, so they need to be run more strategically. (At a minimum: right before releasing to production.)

But whereas unit tests look at each function in a vacuum, E2E tests confirm that all those functions work well together. Since E2E tests encompass all the systems and functionalities that make user workflows possible, they technically cover API testing, integration testing, and functional testing, so you get three for the price of one. 

If you had to choose just one type of testing to do, we’d vote for E2E testing, since it directly tests the important flows in your app that need to work for your end users and customers.

Exploratory tests

E2E tests are literally “scripted” – they follow the same set of steps, every time. But plenty of bugs only surface when users go off-script. That’s where the unscripted nature of exploratory testing comes in. Exploratory testers manually interact with an app in creative and unexpected ways to find non-obvious issues. 

Length of tests

Keep each of your E2E tests as short as possible, and make the scope of each one as limited as possible.

The latter part of that suggestion means you shouldn’t try to do more than necessary with any single test. For example, “log in,” “create a profile,” and “add friends” each represent a good scope for a test, whereas “log in, then create a profile, then add friends” does not (see the sketch after the list below).

Following this guidance means you’ll save time and money and be more effective:

  • Each of your tests — and your test suite — will run faster. Being able to run many tests in parallel is one of the selling points of test automation. When all of your tests run in parallel, a run of your entire test suite takes only as long as your single longest test. So keeping each of your tests as short as possible means your entire test suite can execute more quickly.
  • Your tests will be easier to organize. You’ll care a lot about test management as your test suite scales up.
  • Your failed tests will be easier to debug, because each test will have fewer potential points of failure.
  • Each round of testing will be more precise and informative. Building on the example above of a test with too broad a scope: if that test fails at login, you won’t know whether “create a profile” or “add friends” works until you resolve the login issue. If you create tests with narrower scopes, you can isolate problem areas in your app more quickly.
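
And here’s the sketch promised above: the overly broad “log in, then create a profile, then add friends” test split into three narrowly-scoped tests (hypothetical selectors and routes again) that run in parallel and fail independently:

```typescript
// The broad test split into three narrowly-scoped Cypress tests.
// Selectors and routes are hypothetical, as before.
it("logs in", () => {
  cy.visit("/login");
  cy.get("[data-test=email]").type("qa-user@example.com");
  cy.get("[data-test=password]").type("correct-horse-battery");
  cy.get("[data-test=submit]").click();
  cy.contains("Welcome back");
});

it("creates a profile", () => {
  // Starts from a seeded, already-logged-in state (see the test data
  // seeding section above), so a login bug can't mask a profile bug.
  cy.visit("/profile/new");
  cy.get("[data-test=display-name]").type("QA User");
  cy.get("[data-test=save]").click();
  cy.contains("Profile saved");
});

it("adds a friend", () => {
  cy.visit("/friends/search");
  cy.get("[data-test=add-friend]").click();
  cy.contains("Friend request sent");
});
```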

Test coverage

When you start building out your suite of automated tests, start small. You’ll learn a lot in your early efforts about the best way to test certain scenarios and about the quirks of your testing environment. Better to learn these things before you create a bunch of tests you’d end up having to update.

How small is “small”? Your first milestone should be to create a smoke suite – which can be as small as ~five tests – that covers the most important workflows in your app. Once your smoke suite and testing process are stable, expand to a regression suite. Eventually, you should get to the point where you have test coverage for every (important) release.

To figure out which user workflows are the most important, you can ask yourself:

  • Would we consider it a disaster if this workflow broke?
  • Would it prevent users from paying the company money if this workflow broke? 
  • Would we fix this workflow right away if it broke?

Once your team has covered the most important user workflows in your app with E2E tests, the value of adding more tests quickly hits diminishing returns. 

That’s because – as we’ve already established – not all bugs are important, and bugs in unimportant workflows are likely to be unimportant bugs. If additional testing were “free,” that’d be less of an issue, but every additional test comes with maintenance costs, which teams new to test automation consistently underestimate.

So, your goal isn’t to get to “100%” test coverage. Don’t create more tests than you can afford to maintain, and only test workflows you’d fix right away if they broke.

Learn more about the Snowplow Strategy, our approach to prioritizing test coverage.

Define your policies

You won’t create a culture of quality – in which everyone on the team feels accountable for quality and consistently takes the appropriate steps to deliver it – overnight.

In the meantime (and in perpetuity), defining and enforcing policies that reinforce good habits is your best approach. After all, success is the product of daily habits. It helps to have an executive sponsor in engineering and/or product leadership who’s bought in to your policies and will enforce them.

For starters, here are some of the policies we’ve adopted and strongly endorse for other teams:

  • Code reviews must include a review of test coverage. This implies another policy: test coverage must be added before merging.
  • Run E2E tests before releasing to production. This prevents bugs from escaping to prod where users will find them.
  • Failed E2E tests must block the release pipeline. It can be tempting to ignore some test failures in favor of shipping, but this behavior starts a vicious cycle: the test suite becomes outdated, so team members lose confidence in it, so the team invests less effort in the suite, putting it even further out of date. Diagnose and resolve test failures before shipping. (If a failed test indicates an issue that isn’t worth resolving right away, it’s probably not worth having a test for that functionality.)

Measure your results

Beyond the metric you defined to measure your main goal, consider metrics that encourage the behaviors you want to reinforce in your quality assurance processes.

For example, measuring time-to-test – the time it takes for your test suite to run – focuses your team on finding ways to keep the release process moving quickly, like keeping each of your tests as short as possible. Use your policies to counteract any unhealthy shortcuts, like skipping important tests during release.  

Time-to-fix, or the time required to fix an issue once it’s been identified by a failed test, reminds everyone that improving and maintaining quality involves promptly addressing problems when they’re found. 
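
Both metrics reduce to simple timestamp math over your test-run records. Here’s a sketch with a hypothetical record shape:

```typescript
// A hypothetical record of one suite run and its follow-up fix.
interface SuiteRun {
  startedAt: Date;
  finishedAt: Date;
  failedAt?: Date; // set when the run surfaced a failure
  fixedAt?: Date;  // set when a follow-up run went green
}

// Time-to-test: how long the suite takes to run.
function timeToTestMs(run: SuiteRun): number {
  return run.finishedAt.getTime() - run.startedAt.getTime();
}

// Time-to-fix: how long a surfaced failure stayed unresolved.
function timeToFixMs(run: SuiteRun): number | null {
  if (!run.failedAt || !run.fixedAt) return null;
  return run.fixedAt.getTime() - run.failedAt.getTime();
}
```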

Learn more

Of course, there’s more nuance to each of the practices and guidelines we’ve set forth in this post – so much of developing an effective QA process is easier said than done.

Our experienced Quality Advocates can help you with best practices and implementation decisions in any area of your QA strategy, so talk to us about setting up a Rainforest plan that fits your needs.