At Rainforest QA we’ve spent 6 years helping hundreds of customers QA more effectively, and advising hundreds more on their QA strategies. And yet despite so many different customers, we’ve seen just one common failure mode: the boom and bust. How can your company avoid it? Read on for our perspective. First off - what is the boom and bust of QA?
The boom and bust of QA happens when, after the initial excitement wears off, your product and dev teams find it hard to prioritize maintaining test cases over writing features for customers. We’ve seen this at tiny startups, where initial automation efforts break down over time as the team fails to keep the test suite up to date. We’ve also seen it at behemoths like Salesforce and LinkedIn, who walked back ‘total automation’ efforts in light of the massive development effort required to maintain the test suites (the word on the street is that eventually half of Salesforce’s dev headcount was focused on test automation). So how does it happen?
New testing initiatives are often triggered by a high-profile failure. Maybe your biggest customer runs into a critical bug. Perhaps your sales team loses a big deal because the product doesn’t work properly. Maybe you accidentally influenced an election by having poor access controls on your data. Whatever happened, this tends to drive a lot of focus on quality and QA processes, usually from the top down. This translates into motivation. The team is pumped to solve the problem, and resources get prioritized accordingly. We thus enter the boom phase of quality.
The team is motivated to solve the problem. Better quality software is easy to get behind. And when quality is a priority, it’s natural to spend energy on addressing the issue. During this phase you’ll have dev teams in problem-solving mode, figuring out how to solve the quality issues with automation, process, and platforms. You’ll also find that other team members start testing in a reactive, unrepeatable fashion just to put a band-aid on the wound. You’ll likely have a conversation with the rest of the organization about the culture of quality, and the norms needed to move the quality level forward. In general, quality is top of mind and the team is happy to balance building tests against investing in feature dev.
Inevitably, things start to break down over time. Two things happen.
First, your organization’s focus on quality and QA starts to bear fruit, and complacency sets in. “Our quality is awesome!” Your team inevitably starts to move on to other things. We’ve often seen that when you hire QA, quality efforts by your dev team tend to fall by the wayside, because QA becomes the “mistake finding” team and individual checks aren’t consistent on the eng side.
Second, your engineering managers will find it hard to consistently prioritize test suite maintenance over building features for customers. Ultimately this comes down to value creation. You want your product and dev teams writing code and shipping. Anything that isn’t shipping new product to customers is a distraction, and therefore represents opportunity cost. As software trends towards Continuous Delivery, the same teams increasingly both build and test, which is where this trade-off becomes problematic. As an industry, what we see is that quality (and thus QA) is rarely measurable enough to drive clear internal accountability, which makes it easy to de-prioritize.
When the core job of a software company is to ship value to its users, and when measuring quality objectively is rare, it can be hard to consistently prioritize tasks like QA.
Finally we get to the bust. This stage happens when enough of your test suite is out of date that the dev team stops trusting the results. Maintaining the test suite has dropped off the prioritization list, and a significant portion of your test suite is now out of date and therefore broken. When the trust disappears, your QA approach is dead. Developers now either ignore the test suite or treat it as signal rather than truth. One large software company that you’ll know ended up with such a flaky automated test suite that they ran it 10 times in parallel, then averaged the passes and fails on a per-test basis to try and generate signal from a broken system.
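To make that last anecdote concrete, here’s a minimal sketch of what “averaging per-test results across repeated runs” looks like. All names and the 50% threshold are our own assumptions for illustration, not details of that company’s actual system:

```python
# Hypothetical sketch: classify each test by its pass rate across N runs
# of a flaky suite. Function names and the threshold are assumptions.
from collections import defaultdict

def aggregate_runs(runs, pass_threshold=0.5):
    """Given a list of runs, each mapping test name -> bool (passed),
    label each test 'pass' or 'fail' by its pass rate across all runs."""
    passes = defaultdict(int)
    totals = defaultdict(int)
    for run in runs:
        for test, passed in run.items():
            totals[test] += 1
            if passed:
                passes[test] += 1
    return {
        test: "pass" if passes[test] / totals[test] > pass_threshold else "fail"
        for test in totals
    }

# Example: 10 runs with one stable pass, one flaky test, one real failure.
runs = [{"checkout": True, "login": i % 3 != 0, "search": False} for i in range(10)]
print(aggregate_runs(runs))  # → {'checkout': 'pass', 'login': 'pass', 'search': 'fail'}
```

Note what this buys you: the flaky test averages out to a pass, and the genuine failure still surfaces - but you’re spending 10x the compute to paper over a suite nobody trusts.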
This is usually the time when your team starts saying things like “our QA sucks” or “we don’t have QA,” and when your leaders start saying things like “our devs are responsible for quality” and “we don’t believe in QA.”
The first step to solving the boom and bust is to recognize the root cause of the problem - misaligned incentives. For a project to happen inside an organization, someone needs to be responsible and accountable for solving it, as well as empowered by leadership through culture. We often see that engineering owns quality, but isn’t accountable for quality. The best way to address this is to create an agreed-upon quality standard (we like coverage, but there are other decent measures) and hold one of your leaders accountable for maintaining or improving that measure over time.
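One way to make “maintain or improve the measure” operational is a simple automated check. The sketch below is purely illustrative - the feature-level coverage measure, names, and baseline are assumptions, not a prescribed metric:

```python
# Hypothetical quality-standard check: track the share of user-facing
# features that have at least one current test, and flag any regression.
# All feature names and the baseline value are illustrative assumptions.

def feature_coverage(features, tested):
    """Percentage of shipped features that have at least one current test."""
    if not features:
        return 100.0
    return 100.0 * len(set(features) & set(tested)) / len(features)

def check_standard(current, baseline):
    """'Maintain or improve': the measure must not drop below the baseline."""
    return current >= baseline

features = ["signup", "checkout", "search", "invoicing"]
tested = ["signup", "checkout", "search"]
cov = feature_coverage(features, tested)
print(f"coverage: {cov:.0f}%")          # → coverage: 75%
print(check_standard(cov, baseline=75.0))  # → True
```

The specific measure matters less than the accountability loop: a number someone owns, computed the same way every release, with a visible failure when it regresses.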
Next, recognize that any testing assets you create represent technical debt, because they are coupled to the application under test. Whenever the application changes, the test suite must change correspondingly. In our experience, this is the most overlooked cost of doing QA - often to the extent that engineering leaders will frame automated testing, for example, as “free.” Free it is, if your developers’ time is worthless.
Finally, embrace the reality of the feature testing lifecycle, and choose a diverse QA strategy by matching each phase with the appropriate methodology for your business.
And obviously, if this resonates I'd encourage you to check out the platform that we've built - we think it's the best way to sidestep the boom and bust.