Check out any LinkedIn community, online forum, or the comments of an article about best QA practices, and you’re bound to see posts foretelling the doom of manual QA testing, advice on how to automate more of your testing, and questions on getting from manual to automated testing.
That paints a pretty strong picture of the perceived value of manual QA in the testing world — it’s low-tech, time-consuming, doesn’t scale well, and should be abandoned as soon as you can afford to do so. It’s easy to assume that because automated testing is faster, more scalable, and less labor-intensive (once it’s up and running), companies that are serious about QA should toss their manual methods as soon as they can.
But that’s not the whole story.
While it’s true that manual testing can be a bottleneck when it comes to fast-moving development, that doesn’t mean you should abandon it entirely. A comprehensive testing strategy should include manual testing to better represent the human users of the product.
The tricky part? Figuring out how to leverage the benefits of manual QA without slowing down development.
Let’s start by defining what we mean by manual QA. One of the most straightforward ways to categorize QA activities is to think in terms of checking and testing. What people call QA testing is really a wide range of activities that make up a solid QA strategy, and most of those activities fall into one of these two categories.
| Checking | Testing |
| ------------- | ------------- |
| Tasks that evaluate whether a product or feature performs the way you think it will, in concrete ways. | The process of exploring and experimenting with a product to learn more about it, from its limitations to how you can improve it. |
| E.g., checking that when a link is clicked on a webpage, a new tab opens and the linked URL loads. | E.g., determining whether links on a webpage are easy to identify and interact with. |
In short, checking confirms what you already expect from a product and gives a binary result (pass or fail). Checking is an important QA activity, but most of the time it can be performed quickly and effectively via automated testing. In a continuous delivery (CD) environment, manual checking is slow, clunky, and incompatible with moving fast. Testing, by contrast, is a more abstract, exploratory process that often lacks clear goals and objectives, so automating complex tests can be less efficient and less useful than running them manually.
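To make the distinction concrete, here’s a minimal sketch of what a "check" looks like as code — a binary pass/fail scan of a page’s markup for anchor tags that are missing an `href`. The names `LinkChecker` and `check_links` are illustrative, not from any particular testing framework; only the Python standard library is used.

```python
from html.parser import HTMLParser


class LinkChecker(HTMLParser):
    """Collects <a> tags and flags any that lack an href attribute."""

    def __init__(self):
        super().__init__()
        self.total = 0    # all <a> tags seen
        self.broken = []  # attribute lists of <a> tags with no href

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.total += 1
            if not dict(attrs).get("href"):
                self.broken.append(attrs)


def check_links(html):
    """Return (total links found, list of links missing an href)."""
    checker = LinkChecker()
    checker.feed(html)
    return checker.total, checker.broken


page = '<a href="/docs">Docs</a><a>Oops, no href</a>'
total, broken = check_links(page)
print(total, len(broken))  # 2 links found, 1 of them broken
```

A check like this answers exactly one expected-behavior question and nothing more — a machine can run it on every commit. Deciding whether those links are easy to spot and pleasant to use is testing, and no assertion captures that.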
When it comes to exploratory and interface testing, humans still beat machines by a long shot. While we’re making big strides with machine learning, having a human tester poke around a product to see what they discover is still one of the best ways to truly test the quality of a piece of software. After all, users are real people, so why not test with real people, too?
These unscripted exploratory tests can mean the difference between shipping a product that should work fine, and a product that actually works. Usability can be a serious roadblock to adoption, and testing a feature for acceptance is a critical aspect of QA. Manual testing is critical because it helps you test the product from the perspective of a user, making sure that by the time it hits your customers, it’s ready for them.
If you're at all familiar with QA, you've probably seen the ideal QA-testing pyramid, which helps visualize how you should focus your testing efforts to optimize efficiency. In a perfect world, your software testing schema would look something like this:
Image from watirmelon.com
In this model, most QA tests, especially repetitive ones, are automated. As a result, your QA team spends most of its time managing these automated testing suites. Manual tests are used sparingly, reserved for cases for which having a human perspective is most valuable. Functional testing and usability testing, for which QA tests must mirror the user’s experience as closely as possible, absolutely require manual testing.
But in practice, the pyramid isn’t always easy to achieve. When it comes to mature products and features, manual testing can end up being a slow, resource-hogging process. A survey conducted by IDT found that almost half the companies surveyed spend up to 50% of their time on software testing, and as much as 90% of that time is spent on manual QA tasks — not quite the pointy tip of the pyramid we’re aiming for. Alister Scott has dubbed this inversion the “ice-cream-cone anti-pattern.”
Image from watirmelon.com
For many companies that adopt CI/CD, testing ends up being a bottleneck: bloated manual testing activities slow processes down and take time away from building and maintaining more efficient test-automation suites. To avoid the ice-cream cone and flip the pyramid back onto its base, DevOps and QA teams have to push for more organized testing processes and more automation.
Unfortunately, if you’re doing agile or CD, chances are you don’t only have mature, stable products. For fast-moving companies, manual testing is more flexible and faster to implement than automation. QA processes in CD environments often have to include a little more manual QA than slower development cycles do — but that’s not always a bad thing.
So manual QA isn’t going anywhere, but does it have to be so slow? Manual QA needs a makeover. Tools like the Rainforest platform allow you to crowdsource manual QA testing so that you benefit from having human testers work on your features without eating up your QA team’s time. By treating outsourceable manual tests more like automated tests, you can get all those manual tests done without slowing production.
Have we convinced you that manual QA can be a part of a fast, efficient QA strategy? Check out our ebook on Scaling QA without Scaling Your QA Team for real-world tips on developing a faster QA strategy and leveraging manual QA tests more effectively.