When you think about the quality of your QA strategy, is test coverage your primary metric for success? Does test execution time creep up as your product matures, as more and more test cases are added? Are bugs slipping through the cracks regardless of how many tests you run before release?
It’s tempting to equate a large number of test cases with sufficient test coverage, but the two don’t necessarily go together. Massive test case databases often mean more test management and execution time for diminishing returns on quality insights. According to the 2017-2018 World Quality Report, nearly a third of organizations struggle with inefficiencies in their testing strategy, resulting in increased QA spend and longer testing cycles.
If more tests aren’t the answer to better coverage, what can teams do to improve the quality of their software testing strategy?
Using data to drive your approach to test coverage aligns your QA strategy with business goals. By following the data, you'll be able to allocate your resources to greater effect and reduce wasted QA spend.
A good start is using error-reporting software to track defects by product area, developer, team, and source of spec. Patterns will often emerge, and those patterns can guide process improvements and retrospectives with developers.
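As an illustration, a minimal sketch of this kind of tally (the defect records and field names here are hypothetical, not tied to any particular error-reporting tool): counting defects along each dimension is often enough to surface the clusters worth a retrospective.

```python
from collections import Counter

# Hypothetical defect records exported from an error-reporting tool.
defects = [
    {"area": "checkout", "developer": "alice", "team": "payments", "spec": "PRD-12"},
    {"area": "checkout", "developer": "bob",   "team": "payments", "spec": "PRD-12"},
    {"area": "search",   "developer": "carol", "team": "discovery", "spec": "PRD-7"},
    {"area": "checkout", "developer": "alice", "team": "payments", "spec": "PRD-9"},
]

# Tally defects along each dimension and report the biggest cluster in each.
for dimension in ("area", "developer", "team", "spec"):
    counts = Counter(d[dimension] for d in defects)
    hotspot, n = counts.most_common(1)[0]
    print(f"{dimension}: {hotspot} ({n} defects)")
```

Even at this level of simplicity, a hotspot like "most defects trace back to one spec" is a concrete conversation starter for a retro.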
Many organizations struggle with device and OS coverage. The need to provide a seamless product experience across every possible combination of browser, device and network often results in a mess of edge case tests. But device fragmentation is making “total” coverage an ever-more unrealistic target. “We have seen 24,093 distinct devices download our app in the past few months. In our report last year we saw 18,796. In 2013 we saw 11,868,” reported OpenSignal in 2015.
The question is, at what point does device coverage provide diminishing returns? Instead of chasing 100% coverage, QA teams should maintain a fixed list of devices and browsers they support, revisiting it quarterly. By focusing on the top 95% of use cases, development teams can cut out edge cases while maximizing the impact of their test cases.
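One way to operationalize that cutoff, sketched here with made-up usage shares (real figures would come from your analytics): sort devices by usage and keep only as many as it takes to cover 95% of sessions.

```python
def coverage_list(usage_share, target=0.95):
    """Return the smallest list of devices whose cumulative usage meets target."""
    covered, selected = 0.0, []
    for device, share in sorted(usage_share.items(), key=lambda kv: -kv[1]):
        selected.append(device)
        covered += share
        if covered >= target:
            break
    return selected

# Hypothetical usage shares from product analytics.
usage = {
    "iPhone 8 / Safari": 0.40,
    "Galaxy S9 / Chrome": 0.30,
    "Pixel 2 / Chrome": 0.15,
    "iPad / Safari": 0.10,
    "Misc long tail": 0.05,
}
print(coverage_list(usage))  # the short list of devices covering 95% of sessions
```

Rerunning this against fresh analytics each quarter is what keeps the supported-device list honest as the market shifts.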
As with determining device coverage, knowing what features and areas of your product to test can be overwhelming. As your user base scales, it can be hard to stay on top of how users are interacting with your product. Says Matt Sanders, Director of Engineering for SolarWinds’ Librato platform: "As we’ve grown, part of what’s changed is the level of contact we have with every customer. We have always tried to be very hands-on and helpful with our support. With a larger customer base, that dynamic has shifted -- just because we haven’t heard about it doesn’t mean that customers aren’t running into issues and churning.”
Again, looking at how customers are using your product -- alongside the frequency of bugs -- can help you narrow your testing scope to where it delivers the greatest ROI. Partner with the product and customer success teams to understand which areas of the product are most active.
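One simple way to combine those two signals (a hypothetical scoring sketch, not a prescribed method): rank product areas by usage multiplied by recent bug count, and point your testing effort at the top of the list first.

```python
# Hypothetical inputs: monthly sessions per area (from product analytics)
# and bugs filed per area last quarter (from your issue tracker).
usage = {"checkout": 50_000, "search": 120_000, "settings": 8_000}
bugs = {"checkout": 30, "search": 10, "settings": 2}

# Rank areas by usage x bug frequency: heavily used, bug-prone areas first.
priority = sorted(usage, key=lambda area: usage[area] * bugs.get(area, 0), reverse=True)
print(priority)
```

A moderately trafficked area with a high defect rate can outrank your busiest area here, which is exactly the kind of insight raw usage numbers alone would miss.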
In 90 Days to Better QA, Bleacher Report’s Sr. Director of QA Automation Quentin Thomas explained how a data-focused audit of their existing test suite helped his team refocus and refine their QA approach. “The data gave us the ammo as QA to say, ‘Hey, why don’t we just consider phasing [this legacy feature] out, because it is causing us a lot of issues and that’s better than trying to spend all this time to test and analyze this stuff,’” he says. “Sometimes getting rid of a service no one is maintaining is going to do a better job of improving quality than anything QA can do.”
At some point, every organization will have to ask how they can make their QA strategy more efficient (we did just that here at Rainforest QA!). Investing in finding the right fit for your QA strategy will help your team improve product quality while preventing your QA budget from becoming bloated and ineffective.
Want to learn more? Check out our webinar on rightsizing your QA approach, featuring Rainforest CTO Russell Smith and Onboarding Specialist Maddie Blumenthal, who discuss how they create the right fit for QA and development teams ready to move faster.