Key Metrics to Help You Track Your QA Success
Implementing a more strategic approach to testing can have a huge impact on product quality, but measuring exactly how your QA strategy has made a difference can be challenging. The long-term success of any QA strategy depends on measuring change and communicating that change to the team at large, so it’s important to measure the right metrics.
Many teams struggle to find the right QA metrics, leading to a frustrating gap between the effort and time invested in testing and the results they can show for it. In this post, we’ll explore five key metrics that QA teams should track to make sure their quality improvement strategy stays on target.
Primary QA Metrics:
Number of Bugs
One of the most direct and essential measures of QA success is the number of bugs that reach customers. Log issues reported by (or which directly affect) customers with as much detail as possible, including date, product area, developer and team. Summarize this log on a weekly basis and report the summary back to the whole team.
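A weekly summary like this is simple to automate. Here is a minimal sketch, assuming a hypothetical bug log kept as a list of dicts (the field names and sample entries are illustrative, not from any particular tracker):

```python
from collections import Counter
from datetime import date

# Hypothetical bug log: each customer-facing issue records when it was
# reported, which product area it hit, and the owning team.
bug_log = [
    {"date": date(2024, 3, 4), "area": "checkout", "team": "payments"},
    {"date": date(2024, 3, 5), "area": "search",   "team": "discovery"},
    {"date": date(2024, 3, 6), "area": "checkout", "team": "payments"},
]

def weekly_summary(log):
    """Group bugs by ISO week, then count issues per product area."""
    weeks = {}
    for entry in log:
        week = entry["date"].isocalendar()[1]  # ISO week number
        weeks.setdefault(week, Counter())[entry["area"]] += 1
    return weeks

print(weekly_summary(bug_log))
```

Grouping by product area (rather than just a raw total) makes the weekly report actionable: a spike in one area points directly at where to look.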
Time to Fix
Tracking time-to-fix, or the amount of time between when something breaks and when it is fixed, is a critical measure of QA health. Time-to-fix provides insight into how effectively a development team is able to use the output from QA to triage and resolve bugs. The easiest way to measure this is the time between a failed build and the next passing build.
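That build-to-build measurement can be sketched from CI history. The build records below are hypothetical; the logic just pairs each run of failing builds with the next passing one:

```python
from datetime import datetime

# Hypothetical CI build history, ordered by time: (timestamp, passed?).
builds = [
    (datetime(2024, 3, 4, 9, 0),   True),
    (datetime(2024, 3, 4, 11, 0),  False),  # breakage introduced
    (datetime(2024, 3, 4, 12, 30), False),  # still broken
    (datetime(2024, 3, 4, 15, 0),  True),   # fix lands
]

def time_to_fix(history):
    """Return one timedelta per failure -> recovery cycle."""
    fixes, failed_at = [], None
    for ts, passed in history:
        if not passed and failed_at is None:
            failed_at = ts                # start of the breakage
        elif passed and failed_at is not None:
            fixes.append(ts - failed_at)  # breakage resolved
            failed_at = None
    return fixes

print(time_to_fix(builds))
```

In this sample there is a single four-hour cycle; tracking the average and worst-case of these deltas over time shows whether the team is getting faster at turning QA output into fixes.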
Want to take these key measurements to the next level? Split your issue tracking out by source. This level of detail will help you better understand the overall quality of the product and identify weak areas in your process that need a boost. Example sources include: external (i.e., customer-reported), internal (i.e., missed by QA), automatic (e.g., error reporting), or test-case failures.
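Tagging each issue with its source makes this split a one-line tally. A minimal sketch, assuming hypothetical issue records carrying a `source` field:

```python
from collections import Counter

# Hypothetical issue records tagged with how each issue was found.
issues = [
    {"id": 101, "source": "external"},   # reported by a customer
    {"id": 102, "source": "internal"},   # missed by QA, caught in-house
    {"id": 103, "source": "automatic"},  # surfaced by error reporting
    {"id": 104, "source": "external"},
]

by_source = Counter(issue["source"] for issue in issues)
print(by_source)
```

If external reports dominate the tally, that suggests issues are slipping past QA; a high internal count instead suggests QA is catching problems but upstream quality needs attention.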
Secondary QA Metrics:
Test Reliability
Broken or unreliable tests don’t provide useful quality feedback to your team, so identifying poor-quality tests is critical. If you’re using automated testing, make sure to track which tests pass or fail intermittently. Tracking test failures over time will allow you to identify the root cause of those failures, whether that’s poorly written tests, mistakes by human testers or test environment failures.
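Flagging intermittent tests falls out naturally once you keep a history of recent outcomes per test. A minimal sketch, with hypothetical test names and results:

```python
# Hypothetical record of recent CI runs: test name -> list of outcomes.
runs = {
    "test_login":    [True, True, True, True],
    "test_checkout": [True, False, True, False],   # intermittent
    "test_search":   [False, False, False, False], # consistently broken
}

def flaky_tests(history):
    """A test is flaky if it both passed and failed in recent runs."""
    return [name for name, results in history.items()
            if any(results) and not all(results)]

print(flaky_tests(runs))
```

Note the distinction: a test that always fails is broken but at least deterministic, while a test that flips between pass and fail erodes trust in the whole suite and should be prioritized for investigation.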
Net Promoter Score (NPS)
While NPS (Net Promoter Score) is a great end-measurement for your entire product, it’s a trailing indicator. Because NPS takes a holistic view of customer satisfaction, it can be challenging to trace fluctuations in NPS to specific quality issues. As a result, NPS can be a useful indicator of overall quality, but it shouldn’t be considered your primary measurement for the success of your QA strategy.
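For reference, the standard NPS calculation from 0–10 survey scores is the percentage of promoters (scores of 9–10) minus the percentage of detractors (scores of 0–6), giving a value between -100 and +100. The sample responses below are hypothetical:

```python
# NPS = %promoters (9-10) - %detractors (0-6), in the range -100..+100.
def nps(scores):
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

sample = [10, 9, 8, 7, 6, 10, 3, 9]  # hypothetical survey responses
print(nps(sample))
```

Scores of 7–8 (passives) count toward the total but neither side of the subtraction, which is one reason a small shift in responses can move NPS without pointing at any specific quality issue.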
Test Coverage
Test coverage is one of the most popular measures of QA health, but it can be misleading and even dangerous if misunderstood. Test coverage by itself is not a measure of the quality or thoroughness of those tests. By relying too heavily on test coverage, teams can easily throw their effort into chasing down endless edge cases or overtesting features that aren’t mission-critical. Instead, use test coverage to find areas that are completely untested and to determine where your team’s effort will be most effective.
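In practice this means looking at per-module coverage rather than a single aggregate percentage. A minimal sketch, assuming hypothetical per-module figures exported from a coverage tool:

```python
# Hypothetical per-module coverage percentages (e.g. exported from a
# coverage tool). The goal is to flag areas with no tests at all,
# not to chase a single aggregate number toward 100%.
coverage = {
    "billing":  0.0,   # completely untested -- highest risk
    "auth":     82.5,
    "reports":  45.0,
    "admin_ui": 0.0,   # completely untested
}

untested = sorted(name for name, pct in coverage.items() if pct == 0.0)
print(untested)
```

A zero-coverage module is a clear, actionable signal; pushing an 82% module to 95% usually is not.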
Finding Better Measurements for QA Success
Refining and reevaluating how you measure your QA process and output is an important component of ensuring that your team continually hits a high bar for product quality. Want to learn more about creating a data-driven QA process? Join us for our upcoming webinar “5 Essential Quality Metrics that Matter.” In the webinar, we will discuss how fast-moving teams can avoid the common pitfalls when it comes to measuring the success of their QA strategy, and how they can level up the quality of their product by taking a data-driven approach.