No two QA processes are quite the same, but there are a few common quality assurance pain points that many teams struggle to resolve. One of the most frustrating is when QA slows down -- or even stops -- deployment velocity.
In this excerpt from 90 Days to Better QA, we explore how to identify and break the QA bottleneck so that your testing process doesn't slow your release cadence.
A number of factors can cause QA to block deployment. While it's easy to assume that the QA team simply needs to test faster, the root cause of a lagging QA process usually runs deeper than that.
For some organizations, a lack of alignment between product, development and QA teams early in the software development lifecycle can lead to an inefficient test creation workflow.
In other cases, a lack of confidence in the reliability of the test suite can cause the QA team to overtest before every release to make sure it doesn't cause issues for users.
Whatever the cause, it is essential for QA leaders to take a step back and evaluate their pre-deployment testing process, from how teams decide what to test to how those tests are executed.
Moving from manual to automated testing should be an overarching goal. But that is a long-term goal that takes sustained incremental improvement to achieve.
As your team works toward it, it's imperative to look for stopgap measures that improve velocity without waiting on a year-long push to build out an automated test framework. Even highly automated shops can find that velocity is impeded if their automated tests are extremely brittle. In that case, improving the reliability of testing and instituting stopgaps is crucial to keep the gears moving.
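One common stopgap is to split the suite into a fast, critical "smoke" tier that gates every deploy and a slower "full" tier that runs on a schedule. The sketch below is a simplified, hypothetical illustration of that idea (the test names and tier labels are invented for the example, not taken from any particular tool):

```python
# Hypothetical sketch: tag each test with a tier so that only the fast,
# critical "smoke" subset gates every deploy, while the slower "full"
# tier runs nightly. Names here are illustrative placeholders.

SUITE = {
    "login_works": "smoke",
    "checkout_flow": "smoke",
    "report_export": "full",
    "legacy_data_import": "full",
}

def select_tests(suite, tier):
    """Return the sorted names of tests belonging to the given tier."""
    return sorted(name for name, t in suite.items() if t == tier)

per_deploy_tests = select_tests(SUITE, "smoke")   # runs on every push
nightly_tests = select_tests(SUITE, "full")       # runs on a schedule
```

In practice most test runners support this pattern natively (for example, marker- or tag-based selection), so the tiering can live in the test code itself rather than a separate registry.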
Enterprise infrastructure management firm SolarWinds leans heavily on Librato, its real-time SaaS operations analytics solution, to help clients understand what's going on with their IT data. The development team averages 20-25 pushes per day to keep Librato continuously improved. But the highly visual platform was pushing the limits of its existing automation capabilities.
“As we scaled up, we found that we had a lot of challenges with our existing automated CI tools, both in terms of how long it took to run those tests, and the overall stability of those test suites,” says Matt Sanders, Director of Engineering for SolarWinds.
As a result, the organization still needed manual testing that wouldn't impede velocity while keeping risk to a minimum. It turned to Rainforest to fill the gap with on-demand testing.
“For a long time it was common that when we made changes to our visualization layer, we would roll it out only for our team, wait 2 or 3 days, and then turn it on for everyone else,” Sanders says. “We tend not to do that anymore because at this point we feel like the feedback loop is fast enough that if we break something we’re going to find out pretty fast. Now that we’re using Rainforest as a safety net, it’s more acceptable to move fast.”
90 Days to Better QA draws lessons from QA experts and real-world case studies to provide a guideline for creating change. In this guide, you'll learn how to kickstart your QA strategy, including how to create an actionable 90-day roadmap, along with advice from QA experts on identifying and resolving common quality issues.