Many teams encounter defect clustering and other quality issues as they build out a more complex product. In this post, adapted from our guide 90 Days to Better QA, we’ll explore how to keep clusters of bugs from dragging down your quality. Bleacher Report’s Senior Automation Engineer Quentin Thomas also weighs in with a real-world approach to addressing defect clustering.

What is Defect Clustering?

Bugs are rarely distributed evenly throughout an application. Defect clustering simply means that a small number of features cause the majority of quality issues in an application.

A range of culprits may be responsible for defect clustering, from legacy code prone to breaking, to newer features undergoing frequent changes, to a particularly fickle third-party integration. Whatever the reason, the ability to identify defect-prone areas of your product is essential.

“When I fix a defect, I change some code. Whenever I create or [change] code, there’s a probability that I will introduce a new defect (or three). When I fix these new defects, I change more code, which creates a higher probability of more new defects.”

Jason Gorman, Software Engineer

Are you affected by Defect Clustering?

There are a few key indicators that you may be dealing with defect clustering:

  • You have a significant number of test cases, but issues still appear regularly
  • You have one or two “problem features” where bugs seem to crop up most frequently. This is the Pareto Principle in action: roughly 80% of your issues come from 20% of your features (see the sketch after this list).
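If you want to check whether that 80/20 pattern actually holds for your product, a quick tally of defects per feature is usually enough. Below is a minimal sketch assuming a hypothetical CSV export from your bug tracker with a "feature" column; the file name and column name are illustrative, so adapt them to whatever your tracker actually records.

```python
# A minimal sketch of a Pareto check on exported bug-tracker data.
# Assumes a hypothetical defects.csv with a "feature" column per defect.
import csv
from collections import Counter

def pareto_report(path: str) -> None:
    with open(path, newline="") as f:
        counts = Counter(row["feature"] for row in csv.DictReader(f))

    total = sum(counts.values())
    top_n = max(1, round(len(counts) * 0.2))  # the top 20% of features
    top_share = sum(c for _, c in counts.most_common(top_n)) / total

    print(f"{len(counts)} features, {total} defects")
    print(f"Top {top_n} features account for {top_share:.0%} of defects:")
    for feature, count in counts.most_common(top_n):
        print(f"  {feature}: {count} ({count / total:.0%})")

if __name__ == "__main__":
    pareto_report("defects.csv")
```

If the top handful of features accounts for well over half of your defects, you are looking at clustering, and that is where an improvement initiative will pay off fastest.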

Solution: How to Minimize Defect Clustering

It might seem obvious, but if an organization digs into its metrics and finds that the majority of issues revolve around a specific application, product feature, or code base, then the greatest gains come from focusing an improvement initiative on that specific software.

That doesn’t mean abandoning everything else in the interim; it means redirecting a few extra resources and some extra muscle to make a difference in the targeted technology.

For example, if the majority of customer complaints focus on one aspect of a product, then a short improvement initiative is well spent solely on improving quality around that aspect. It might mean borrowing product managers or a few developers from other projects and reassigning them for a month or two to build up test coverage or automation around that feature.

Real-World Fix: Take a Data-Driven Approach to QA Coverage

A big believer in data-driven quality improvement, Bleacher Report’s Quentin Thomas says the best changes are made based on facts rather than feelings. In one instance, he used a trove of defect data from his organization to show that the majority of issues were coming from an old code base that was only lightly maintained. Rather than suggesting the organization pour more resources into coverage for that code base, the data showed that it might be better to phase it out altogether.

“The data gave us the ammo as QA to say, ‘Hey, why don’t we just consider phasing it out, because it is causing us a lot of issues and that’s better than trying to spend all this time to test and analyze this stuff,’” he says. “Sometimes getting rid of a service no one is maintaining is going to do a better job of improving quality than anything QA can do.”
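One way to make that case with data is to line up defect counts against how actively each service is maintained. The sketch below is a hypothetical illustration of the idea, not Bleacher Report’s actual tooling: the service names, counts, and thresholds are made up, and real numbers would come from your bug tracker and version-control history.

```python
# Hypothetical sketch: flag services that generate many defects but
# receive little maintenance, making them candidates for retirement
# rather than for more test coverage. All data below is illustrative.

defects_per_service = {"legacy-scores": 48, "news-feed": 9, "push-alerts": 6}
commits_last_quarter = {"legacy-scores": 2, "news-feed": 57, "push-alerts": 31}

DEFECT_THRESHOLD = 20   # "causing us a lot of issues"
COMMIT_THRESHOLD = 5    # "no one is maintaining it"

for service, defect_count in sorted(
    defects_per_service.items(), key=lambda item: item[1], reverse=True
):
    commits = commits_last_quarter.get(service, 0)
    if defect_count >= DEFECT_THRESHOLD and commits <= COMMIT_THRESHOLD:
        print(f"{service}: {defect_count} defects, {commits} commits "
              f"-> consider phasing it out instead of adding coverage")
```

However you assemble it, a simple report like this turns a gut feeling about a troublesome service into a concrete, data-backed recommendation.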

Learn More about Troubleshooting Common QA Issues

Want to learn more about improving your QA process? 90 Days to Better QA explores common quality pitfalls and how to resolve them with software testing experts. Download the guide now for more on troubleshooting common QA issues and our roadmap for improving your software testing process in the next 90 days.