At Rainforest QA we’ve spent years helping hundreds of customers QA more effectively, and advising hundreds more on their QA strategies. And yet, despite so many different customers, we’ve seen just one common failure mode: the boom and bust cycle.

[Image: the boom and bust cycle of QA]

The boom and bust of QA is that after the initial excitement wears off, your product and dev teams will find it hard to prioritize maintaining test cases over writing features for customers. We’ve seen this at tiny startups, where the initial automation efforts break down over time as the team fails to keep the test suite up to date. We’ve also seen this at behemoths like Salesforce and LinkedIn, who walked back from ‘total automation’ efforts in light of the massive development effort required to maintain the test suites (the word on the street is that eventually half of Salesforce’s dev headcount was focused on test automation). So how does it happen?

How it starts: reactionary QA

New testing initiatives are often triggered by a high-profile failure. Maybe your biggest customer runs into a critical bug. Perhaps your sales team loses a big deal because the product doesn’t work properly. Whatever happened, it tends to drive a lot of focus on quality and QA processes, usually from the top down, and that focus translates into motivation. The team is pumped to solve the problem, and resources get prioritized accordingly. We thus enter the boom phase of quality.

The boom phase

The team is motivated to solve the problem. Better-quality software is easy to get behind, and when quality is a priority it’s natural to spend energy on addressing the issue. During this phase you’ll have dev teams in problem-solving mode, figuring out how to automate and how to solve the quality issues with process and platforms. You’ll also find that other team members start testing in a reactionary, unrepeatable fashion, just trying to put a band-aid on the wound. You’ll likely have a conversation with the rest of the organization about the culture of quality, and the norms needed to move the quality level forward. In general, quality is top of mind and the team is happy to balance building tests against investing in feature development.

The breakdown

Inevitably, things start to break down over time. Two things happen.

First, your organization’s focus on quality and QA starts to bear fruit, and complacency sets in. “Our quality is awesome!” Your team inevitably starts to move on to other things. We’ve often seen that once you hire QA, quality efforts by your dev team fall by the wayside: QA becomes the “mistake-finding” team, and individual checks on the engineering side stop being consistent.

Second, your engineering managers will find it hard to consistently prioritize test suite maintenance over building features for customers. Ultimately this comes down to value creation. You want your product and dev teams writing code and shipping; anything that isn’t shipping new product to customers is a distraction, and therefore represents opportunity cost. As software trends toward continuous delivery, the same teams increasingly both build and test, and that’s where this trade-off becomes problematic. Across the industry, quality (and thus QA) is rarely measured well enough to drive clear internal accountability, which makes it easy to de-prioritize.

When the core job of a software company is to ship value to its users, and when measuring quality objectively is difficult, it can be hard to consistently prioritize tasks like QA.

The bust phase

Finally, we get to the bust. This stage arrives when enough of your test suite is out of date that the dev team stops trusting the results. Maintaining the test suite has dropped off the prioritization list, and a significant portion of the suite is now out of date and therefore broken. When that trust disappears, your QA approach is dead. Developers now either ignore the test suite or treat it as a signal rather than the truth. One large software company we know ended up with such a flaky automated test suite that they ran it ten times in parallel, then averaged the passes and fails on a per-test basis, trying to extract signal from a broken system.
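
To make that workaround concrete, here is a minimal sketch in Python of how per-test pass rates across repeated runs might be computed. The test names, results, and the 50% cutoff are made up for illustration; this shows the coping mechanism, not a recommendation.

```python
# Hypothetical sketch: estimating per-test pass rates by re-running a flaky
# suite several times. Test names and results are invented; in practice these
# would come from your CI system's test reports.
from collections import defaultdict

runs = [
    {"test_login": True,  "test_checkout": False, "test_search": True},
    {"test_login": True,  "test_checkout": True,  "test_search": True},
    {"test_login": False, "test_checkout": True,  "test_search": True},
    # ...in the real story there were ten runs, not three
]

def per_test_pass_rate(runs):
    """Average pass/fail per test across repeated runs of the same suite."""
    totals = defaultdict(lambda: [0, 0])  # test name -> [passes, attempts]
    for run in runs:
        for name, passed in run.items():
            totals[name][1] += 1
            if passed:
                totals[name][0] += 1
    return {name: passes / attempts for name, (passes, attempts) in totals.items()}

if __name__ == "__main__":
    for name, rate in sorted(per_test_pass_rate(runs).items()):
        verdict = "likely real failure" if rate < 0.5 else "likely flake or pass"
        print(f"{name}: {rate:.0%} pass rate ({verdict})")
```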

This is usually the time when your team starts saying things like “our QA sucks” or “we don’t have QA,” and when your leaders start saying things like “our devs are responsible for quality” and “we don’t believe in QA.”

How to avoid the “boom and bust” QA cycle

The first step to solving it is to recognize the root cause of the problem: misaligned incentives. For a project to happen inside an organization, someone needs to be responsible and accountable for solving it, as well as empowered by leadership through culture. We often see that engineering owns quality, but isn’t accountable for it. The best way to address this is to create an agreed-upon quality standard (we like coverage, but there are other decent measures) and hold one of your leaders accountable for maintaining or improving that measure over time.
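
As one illustration (and only one of several reasonable measures), a coverage standard can be as simple as tracking the share of critical user flows that have a maintained, passing test, and making a named leader accountable for that number. A minimal sketch in Python, with hypothetical flow names and an assumed 80% bar:

```python
# Hypothetical sketch: "coverage" as the share of critical user flows protected
# by a maintained, passing test. Flow names and the 80% bar are illustrative.

CRITICAL_FLOWS = ["signup", "login", "checkout", "invite_teammate", "export_report"]

# Flows that currently have a maintained, passing test.
covered_flows = {"signup", "login", "checkout", "export_report"}

def flow_coverage(critical, covered):
    """Fraction of critical flows with a maintained test."""
    return sum(1 for flow in critical if flow in covered) / len(critical)

if __name__ == "__main__":
    coverage = flow_coverage(CRITICAL_FLOWS, covered_flows)
    print(f"Critical-flow coverage: {coverage:.0%}")
    # The accountable leader's job is to keep this number from sliding backwards.
    assert coverage >= 0.80, "coverage has dropped below the agreed standard"
```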

Next, recognize that any testing assets you create represent potential technical debt, because they are coupled to the application under test. Whenever the application changes, the test suite must change with it. In our experience, this is the most frequently overlooked cost of doing QA, often to the point that engineering leaders will describe automated testing as “free.” Free it is, if your developers’ time is worthless.
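
For a concrete, hypothetical picture of that coupling, here is what a typical browser test might look like in Python with Selenium. The URL and selector are invented; they stand in for whatever your test suite hard-codes about today’s UI.

```python
# Hypothetical sketch of how a test asset is coupled to the application under
# test. The staging URL and "#buy-now" selector are invented examples.
from selenium import webdriver
from selenium.webdriver.common.by import By

def test_checkout_button_visible():
    driver = webdriver.Chrome()
    try:
        driver.get("https://staging.example.com/cart")  # assumed staging URL
        # Coupled to today's markup: the test only knows about the current page.
        button = driver.find_element(By.CSS_SELECTOR, "#buy-now")
        assert button.is_displayed()
        # If the team renames the button, moves it behind a modal, or redesigns
        # the cart page, this test fails and must be updated; that update is
        # the maintenance cost discussed above, multiplied across the suite.
    finally:
        driver.quit()
```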

Finally, embrace the reality of the feature testing lifecycle, and choose a diverse QA strategy by matching each phase with the appropriate methodology for your business.