It happens at almost every early-stage startup: a major bug escapes into prod (or into an investor demo).
For teams without meaningful quality controls in place, the common reactions are dismay, embarrassment, and the instinct to hire someone who can own quality. In these cases, “quality” generally means a lack of bugs.
While the logic seems sound – there’s a quality issue, so we should hire an expert to handle quality – this instinct is ultimately misguided.
In fact, making a first QA hire can actively degrade one of the most important qualities of a scrappy startup: speed. We’ve seen this repeatedly in our ten years of working with startups to improve product quality.
In this post, we’ll explain why startups usually don’t need to (and often shouldn’t) hire a QA person to improve quality. Along the way, we’ll show you how you can improve QA practices and product quality without spending your startup’s limited budget on additional headcount.
Let’s examine the implications of the most common arguments for hiring a startup’s first QA. We’re going to do that in the context of these assumptions:
First, given limited budgets and resources, startups would prefer to avoid hiring when it isn’t necessary for improving operational performance – especially when market conditions are difficult.
Second, in most cases, ideal operational performance for a startup means delivering code fast and frequently, with quality. Delivering code fast with quality implies a reliance on test automation, so most of the following references to testing will imply automated tests, unless otherwise indicated.
The most common argument for making a first QA hire is that someone needs to be accountable for quality. It’s also the most fraught argument.
Startups often follow the superficial logic that the person accountable for quality should be a person with “quality” in their title. But a lot can go wrong when you put the onus of quality on any one person or role instead of on your entire software team – particularly when that person has limited control over product development.
First, there are the perverse incentives.
When you put responsibility for quality on a single role (QA), it creates a situation where only QA is incentivized to care about quality. People in other roles tend to become somewhat less conscientious.
For example, a developer might think: I can focus on speed of development instead of quality because we’ve got someone else on the team to catch my bugs.
This incentive structure makes overall quality more likely to decline than improve. You might catch some bugs before they hit prod, but there’ll be more bugs to catch. More bugs means more time your team needs to spend debugging, but there’s never enough bandwidth to squash every bug, so tech debt piles up.
That’s already bad enough, but the situation gets worse thanks to another condition: QA people don’t control product decisions.
They don’t code (the product), they don’t design the UX or the UI, and they don’t prioritize features. The one thing directly under their control is test coverage. When you make someone directly accountable for quality and all they can control is how much testing is done, you incentivize them to test as much as possible to make sure no bugs slip through the cracks. Less conscientiousness on the rest of the team only increases the pressure to test extensively.
Does that sound good? Because it’s not.
Not all testing is valuable or even useful. Once you’ve got tests covering the critical, absolutely-can’t-break user flows in your app, additional testing hits diminishing returns, because testing isn’t free.
In addition to increasing the time it takes for your test suite to run, every additional test requires additional costs in maintenance. In fact, we find that the maintenance required to keep tests in sync with evolving product features is one of the most underestimated costs of test automation. (Which is why we say less is more when it comes to test coverage.)
Every time even a minor product change breaks tests in your suite, that’s time your QA person has to spend investigating the test failure and then updating the tests to get the suite to a passing state.
Running many more tests than necessary quickly becomes the biggest bottleneck in the release process.
Software development is a constant tradeoff between speed, features, and quality. It’s sensible to balance a focus on speed with quality control, but too much testing overcompensates at the expense of the company.
Developers in particular are incentivized to ship code as quickly as possible. If excessive testing by QA to try to catch every little (often noncritical) bug consistently creates bottlenecks in releases, developers naturally get frustrated. When startups effectively put QA in charge of the release process instead of developers, it can create a lot of resentment.
We’ve seen this situation play out in one of two (unfortunate) ways: it takes much longer to ship code, or developers get fed up with slowdowns and ignore QA. The latter path allows more bugs into prod, breaks more tests, and puts QA even farther behind, which only makes the tensions with the dev team worse. Either way, QA loses credibility in the organization and quality suffers.
A better model is to put accountability for quality in the hands of the people who ultimately control what gets shipped: product managers (PMs) and developers.
Having to create a QA strategy sounds pretty daunting when you’ve never had to plan one before.
But creating a QA strategy is actually quite straightforward. In fact, after ten years of helping startups improve QA, we’ve managed to boil QA best practices down into just five principles.
In a nutshell, here’s what you need to know about QA strategy as an early-stage startup:
1. Make your product builders responsible for QA. As we’ve established, accountability for quality should rest with the people who decide what gets shipped – PMs and developers.
2. Systematize quality practices within your CI/CD pipeline. The checkpoints and automated nature of CI/CD make it ideal for ensuring consistency in your QA practices. Mainly: confirm e2e test coverage as part of code reviews, and run e2e tests automatically as part of the release process, where any failing tests block the corresponding release. Assign the same person who enforces development and deployment policies to enforce these QA policies, too.
3. Take a less-is-more approach to test coverage. As you learned in the previous section, trying to test all the things is too expensive in maintenance costs and time-to-release. Only add e2e test coverage for the most important user flows in your app – the things you’d fix right away if they broke. (We call this the Snowplow Strategy.)
4. Set your environments up for testing success. Once you’ve configured your test environments, seed them with test data and make it easy to reset them. Thoughtful design of your environments will make all the difference in the speed and effectiveness of your testing.
5. Know when to use automation and when not to. The speed and (low) cost of automation make it very compelling, but it’s not a good fit for every situation. Use it for rote, repetitive testing that doesn't require subjective evaluation, like regression testing.
The flexibility of manual testing is a better fit for features under active development that are constantly evolving. For high-risk releases, specifically consider exploratory testing (i.e., unscripted manual testing), which is designed to uncover bugs that happen off the happy path.
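Principle 2 above can be sketched in a few lines. This is a minimal, hypothetical illustration of a release gate, not a prescription for any particular CI system: the command that invokes your e2e suite will depend on your stack, and most CI services treat a non-zero exit code as a failed job, which is what blocks the release.

```python
import subprocess
import sys

def run_e2e_suite(command):
    """Run the e2e test command and report whether every test passed
    (by convention, a zero exit code means success)."""
    result = subprocess.run(command)
    return result.returncode == 0

def gate_release(command):
    """Fail the CI job when the e2e suite fails, blocking the release."""
    if not run_e2e_suite(command):
        print("e2e tests failed -- blocking release")
        sys.exit(1)  # non-zero exit fails the CI job, which blocks the release
    print("e2e tests passed -- release can proceed")
```

In practice, `command` would be whatever invokes your suite (a test-runner CLI, a script, or a call to a service’s API), and the same gating step runs automatically on every release candidate so nobody has to remember to run the tests.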
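As a minimal sketch of principle 4, here’s one way to make a test environment easy to reset, using an in-memory SQLite database as a stand-in for your real test database. The table name and seed rows are purely illustrative:

```python
import sqlite3

# Illustrative seed data; in a real setup this would be a known-good
# snapshot of the records your e2e tests depend on.
SEED_USERS = [("alice@example.com",), ("bob@example.com",)]

def reset_and_seed(conn):
    """Drop any leftover test data and reload the seed so every test run
    starts from the same state."""
    conn.execute("DROP TABLE IF EXISTS users")
    conn.execute("CREATE TABLE users (email TEXT NOT NULL)")
    conn.executemany("INSERT INTO users (email) VALUES (?)", SEED_USERS)
    conn.commit()

# Usage: reset before (or after) each e2e run.
conn = sqlite3.connect(":memory:")
reset_and_seed(conn)
reset_and_seed(conn)  # resetting is idempotent -- safe to run between every test
```

The design point is that the reset is a single, idempotent operation: any test (or any person) can call it at any time without worrying about what state the environment was left in.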
It’s true: an experienced QA professional can be particularly good at devising creative ways to break an app to uncover bugs. But this alone isn’t worth a full-time hire.
First, the only time you should be trying to break an app to find bugs (which implies exploratory manual testing) is when you’re preparing to release the stable version of a new feature. In subsequent releases, automated tests are sufficient for making sure the feature continues to operate as intended.
Second, if you don’t think your existing team’s dogfooding and exploratory testing of the app is sufficient (it often is), you can outsource exploratory testing much more affordably than you can hire someone full-time.
For example, as part of our Premium plan, Rainforest offers exploratory testing by a group of our experienced QA specialists.
When automated end-to-end (e2e) tests break due to feature changes, developers are in the best position to bring those tests up-to-date because they’re the most familiar with what gets shipped in each release.
But shipping code is one of the top priorities for developers at a startup. Anything that distracts from that priority is difficult to justify.
So, understandably, many startups aren’t keen on assigning their developers the extra work of setting up and maintaining automated e2e tests. The popular automation frameworks like Selenium, Cypress, and their derivatives require you to know their commands and syntax. Plus, maintaining tests within these frameworks means digging around in code to find the relevant DOM selectors. It’s a pain.
In some cases, startups consider outsourcing their test automation. But – given the underappreciated cost of test maintenance – they soon run into painful back-and-forth and bottlenecks when relying on an external team to keep tests up-to-date.
The solution is a no-code approach, which removes these barriers. For example, we specifically designed Rainforest QA to make it easy to quickly create and maintain automated tests with no code – not just for the developers on your team, but for PMs, too. Everyone who decides what gets shipped can have control over quality.
Our visual test editor previews a live, interactive version of your app in a virtual machine. To create a test step, simply (1) select from one of the available actions (like "Click" or "Fill") and then (2) click-and-drag a box around the element in your app to apply the action to. That's it.
Here's what that looks like in action:
As you can see, everything is fast, human-readable, and intuitive. There’s no new framework to learn and no DOM selectors to keep track of, so there’s almost no learning curve and anyone on the software team can quickly create and update tests.
Sign up for Rainforest to get five hours of no-code test automation for free, every month.
Ultimately, improving software product quality can’t be solved by any one person, role, or tool. In this piece, we’ve specifically illustrated the risks for a startup of putting responsibility for quality on the role of QA.
So what’s the alternative for a startup that wants to improve product quality but would prefer to avoid spending money on additional headcount? Follow the five principles above and put accountability for quality on your PMs and developers. Beyond that, your team should dogfood your product – not just to find bugs, but to empathize with the typical user’s experience. That’s how the best improvements to quality happen, in the broadest sense of the word.