Many software companies have no formal quality assurance (QA) strategy, and those that do take one of two approaches: either they (a) ask developers to do QA or (b) delegate QA to a siloed team, whether internal or outsourced.
Having been in the QA space for a decade, we’ve learned that both approaches are deeply flawed. As we’ll elaborate below, both options create misaligned incentives, lead to finger pointing, waste expensive employee time, reduce job satisfaction, and slow down release schedules.
These problems are exacerbated in software companies running continuous integration and delivery (CI/CD) pipelines, where QA testing has to happen continuously. In that case, if developers own QA, they’re often incentivized to do as little QA as possible so they can ship more software, faster. And if QA is managed by a standalone team, organizational battles erupt over the tension between QA wanting to slow down and get it right and engineering wanting to ship faster.
But after years of building our own software and helping companies implement testing automation, we’ve realized that there is a third option, made possible by no-code test automation. When you remove technical barriers to software testing and make it truly accessible, you allow the people with the highest stakes in shipping bug-free products--product managers, designers, QA analysts, and others--to own quality. There’s no longer an isolated QA team begging for more transparency into product changes so they can properly update their tests in a timely manner. Plus, thanks to the inherent speed of automation, developers can continue to release as fast and frequently as they want.
In this article, we’ll explain why the two standard approaches to QA in software development never work in the long term, and demonstrate why and how we built Rainforest QA’s no-code solution to empower the right people to take ownership of QA.
Since developers already run unit tests as part of their development process, it makes sense on the surface to put them in charge of QA. Many companies do take this approach. To be fair, having developers own QA does keep them in charge of the release workflow, which is essential in a continuous delivery pipeline.
But we believe this approach is inherently flawed because of misaligned incentives, and these flaws outweigh any benefits of having developers own QA.
Developers are typically evaluated based on the quantity of software they ship, and how fast they ship it.
Even when developers are tasked to do QA, they’re still primarily evaluated based on speed. So, when they’re forced to make trade-offs between speed and the amount of testing done to increase assurance about quality (“test coverage”), they’re incentivized to favor speed.
Every minute spent doing QA is a minute not spent writing code, which means that doing QA becomes an annoying chore in the developer’s workday. The more QA they do, the less software they can ship. In practice, this means they’re incentivized to only run the bare minimum number of tests to prove that the code is technically sound.
As a result, when developers are doing QA, they tend to run what are called “happy path” tests. Does the code work when everything is entered properly into the form? If so, move on. They don’t have time or incentive to test weird edge cases, like a name with non-Roman characters or a bad credit card number. And there’s little chance that they’ll be able to step back and ask if the end-to-end user flow in the product even makes sense, because they’re focused on shipping their one section of the product’s code.
Expanding on one of the previous points: we’ve seen that most developers just don’t enjoy doing QA. They see it as repetitive drudge work that doesn’t make use of their most valuable skills. In the best cases, having developers do QA creates organizational inefficiencies because highly paid employees are doing something that less-expensive employees could do. In the worst cases, it leads to turnover because job satisfaction goes down.
We’ve seen software teams try this approach again and again, and the end result is always the same. The developers’ capacity and enthusiasm for doing effective QA wanes over time, and they stop maintaining the tests. When the test suites become stale, they return bad results. So, the team loses faith in the tests and starts relying on ad hoc manual tests. Inevitably, this approach leads to a quality crisis that threatens the success of the product.
When companies have a major quality crisis—like an important customer canceling a deal due to product instability—the most common reaction is to flip to the other standard QA approach.
If their developers were previously doing QA, then they typically switch to a siloed QA approach by either (a) creating an internal, siloed QA team or (b) hiring an external QA vendor. In either case, they create a situation where the people who own QA are isolated from the rest of the product development team.
In theory, having a dedicated QA team would help solve the issues above with misaligned incentives and resource allocation. QA specialists bring a different perspective to software testing than developers, as they tend to focus more on the overall customer experience than just whether the software is technically sound.
Indeed, unlike developers, QA specialists are measured based on the amount of quality assurance work they do (e.g., the number of bugs found), not the quantity of software created.
The biggest problem with this model comes from isolating ownership of QA to just the QA team. On the surface, that sounds reasonable (“Shouldn’t the QA team own QA? Why is that bad?”), but when you separate QA from all other aspects of product development, you end up creating a silo where QA is the only group that is incentivized to care about quality. No one else’s job description involves QA because there’s a separate team that owns it. It leads to situations where, for example, developers know they’re not the last line of defense against bugs, so they’re potentially less conscientious about the cleanliness of their code. As a result, more debugging gets added to the development process, and tech debt piles up because there’s not enough bandwidth to bash every bug.
In addition, and perhaps most importantly, a siloed QA team puts QA in charge of the release schedule, instead of the developers. This leads to the second problem with having a siloed QA team.
In the siloed QA setting, the strength of the model is the people doing QA: dedicated specialists who think critically about the product from the customer’s perspective, find bugs, and improve the product overall. Seemingly, these are the ideal people to manage product quality.
But, often, QA isn’t included in the decision-making process about product changes. Because they’re not invited into product discussions, they have no ability to anticipate product changes, so new features invariably break their test cases. The QA team then has to pause release of new features until they can update those tests.
If the product team and developers decide releases can’t wait on QA to catch up, more features are released, breaking more tests. This puts QA even farther behind as more bugs make it into production. Through no fault of the QA team, the organization loses faith in the QA process because the quality of the product is perceived to be worsening.
Even when QA is looped into the product specification and roadmapping process, it takes time to create and update the tests that validate new and changing features. Indeed, the ongoing test updates--or “test maintenance”--required to keep up with evolving features is often an underappreciated cost of executing QA.
Time spent creating and maintaining test cases is time spent not testing product releases. When QA comes last in the development process, that means QA can become a bottleneck for the developers trying to deploy code.
The nature of software development naturally includes some trade-off between speed and test coverage. But in the siloed QA model, developers become focused on speed and QA specialists become focused on test coverage. The developers might say, “We’re not shipping enough because QA is too slow,” and QA might say, “Well, we could ship faster if they didn’t write so many bugs.”
This kind of contention between developers and QA only leads to losing outcomes for the company: If developers eager to ship code without full QA approval get their way, product quality suffers. If QA successfully bottlenecks releases to assure product quality, developers naturally get frustrated and consider leaving.
Ultimately, the siloed approach to QA fails because it creates a lose-lose power struggle between people who primarily want to ship fast and people who primarily want to ship with quality.
We’ve established that the two common approaches to QA--owned by developers and owned by siloed QA personnel--are ultimately unhealthy for product development and for the company.
In the real world, many people in the company care about product quality, but not all roles are incentivized to prioritize quality over other goals (which is a big part of why developers shouldn’t exclusively own quality).
So, who should own product quality? Where do the incentives align?
We think you should look to roles that are already held accountable for the business outcomes of the product.
In our experience, the best results come when product teams own QA.
We consider product teams to include roles that are evaluated on business outcomes created by the product. That is, the roles incentivized to care about customers having a successful product experience, such as product managers, designers, and QA analysts.
These are the people in the company already spending most of their time thinking about what’s right for the product: for the feature roadmap, the user flows, and the product design. When they have direct control over quality assurance, the people with the best set of information can make the right trade-offs between speed and test coverage to meet business goals.
Also, like many of the most skilled QA specialists, product team members bring their product expertise and real-world considerations to the testing lifecycle. When they test new features, they don’t just run “happy path” tests to make sure the software works in the most basic scenarios. They evaluate whether the end-to-end user experience makes sense, works properly, and helps customers be successful.
So why don’t most software companies assign QA responsibility to product teams? The simple answer is that without no-code test automation, it’s just not feasible.
The most popular test automation solutions on the market—Selenium and Cypress—require a dedicated QA engineer or developer to code every new test and every test update in response to product changes. These code-based automation solutions prevent the rest of the company from taking ownership of QA.
Specifically, in many software companies that haven’t matured to the point of having a formal approach to QA, product managers often serve as de facto QA. Many of these product managers don’t know how to code, and it’s not a worthwhile investment of their time to learn. Even if they do know how to code well enough to write tests in Selenium or Cypress, they don’t have time to write and maintain tests while also successfully performing everything else expected of their role.
As a result, what we’ve seen time and again is product managers scrambling to manually test new features whenever the team releases updates. When they find bugs, they fire off Slack messages or emails to the developers.
Of course, this manual, unstructured approach is unsustainable for many reasons. But if product team members had the right tools to contribute to QA before updates go live, they’d be an ideal fit for the job.
To empower the right people to own QA—that is, the people who are in the best position to make the right trade-offs between speed and quality—you need an automated testing platform that doesn’t require engineering skills: one that lets anyone quickly create, update, and run tests and easily interpret the results.
When technical barriers are removed, allowing anyone to participate in the QA process, we call the resulting democratization of QA “accessible quality”. We’ve built it at Rainforest QA, and we’ll explain how in a moment.
Accessible quality: anyone who cares about quality can create, update, and run automated tests without a technical background.
Without accessible quality, anyone who isn’t a developer or a QA specialist has little meaningful ability to contribute to quality. But with accessible quality, a software company can distribute ownership of quality to all of the people who want or need to be involved with QA. The people who have the best insights about the product and are responsible for its outcomes can manage testing according to the best trade-offs between speed and test coverage.
They can create, update, and/or run end-to-end tests whenever they make changes to the product, even as early as the product/feature design process. And they can easily interpret test results to make informed decisions about how to prioritize bug bashing and other product improvements.
Rainforest QA takes a no-code, visual approach to making automated testing accessible to anyone. Whether you have a coding background or not, it’s fast and easy to create and update tests, to run them with our proprietary automation, and to quickly interpret test results.
For example, here’s a test created in Rainforest QA’s visual editor to validate the signup functionality on the www.rainforestqa.com website:
You create (or edit) every test step in the visual editor with a click or drag-and-select of the mouse. As you can see, anyone looking at the set of steps on the left can understand exactly what’s going on in the test.
Often, you’ll want to re-use a set of test steps within other tests. For example, it’s common to start many tests with a login or signup flow. In Rainforest, when you’re creating a new test, you can simply embed an existing test into the new test to save yourself the trouble of recreating the same steps over and over. In this case, I’ve selected our original “Rainforest Signup Flow” (from the example, above) to embed in this new “Create a First Test Flow” test:
Any time I modify the steps in the “Rainforest Signup Flow” test (i.e., in response to relevant changes in the product), those modifications will automatically propagate to all the other tests containing those steps as an embed. Compare this to code-based automation solutions like Selenium or Cypress, where a coder would need to manually update the code across all the relevant tests any time the product changes.
Running a test (or group of tests) against Rainforest QA’s automation is as quick as selecting the test environment and browser/platform:
Within minutes, you can get a set of test results that are easy for anyone to interpret. Every test result includes a video recording of the test that the automation performed, so you can see exactly where something went wrong. Click a button to add tickets directly to JIRA, or download HTTP logs for your engineering team.
In this case, the test failed the step that looks for the presence of a signup success message. Watching the video playback, we see that the failure resulted from a CAPTCHA preventing a successful signup:
Incidentally, as wonderful as automation is, not every test case can be automated. This CAPTCHA presents a classic, non-automatable scenario: by definition, only humans can pass CAPTCHA tests.
For tests that require human judgement and/or subjective feedback, Rainforest QA also offers a worldwide community of human testers who are available on-demand, 24x7 to manually run your test cases. Any test that you create in Rainforest QA’s visual editor can be run against our automation or against the tester community:
The accessible quality model of QA doesn’t completely align incentives unless it also allows developers to release code quickly and frequently, as in a CI/CD process. Manual execution of test cases usually isn’t fast enough for CI/CD; automated execution of QA tests is a prerequisite for modern software deployment practices.
That’s why our definition of accessible quality also includes consideration of test automation.
Anyone using Rainforest QA can execute automated tests directly from the browser. But for true continuous QA testing, developers can kick off tests using our API, CLI, or CircleCI integration.
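For illustration, a CI job that runs a group of Rainforest tests after each deploy might look like the CircleCI config sketch below. The job name, Docker image, tag name, and exact CLI flags are assumptions for illustration; check the current rainforest-cli documentation for the supported commands and options.

```yaml
version: 2.1
jobs:
  rainforest-smoke:
    docker:
      - image: cimg/base:stable
    steps:
      - checkout
      # (Installing the rainforest-cli is omitted for brevity.)
      # Run all Rainforest tests tagged "smoke". RAINFOREST_API_TOKEN is
      # assumed to be configured as a CircleCI environment variable.
      - run: rainforest run --token "$RAINFOREST_API_TOKEN" --tag smoke
```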
It’s not just our automation that makes running test cases on the Rainforest QA platform really fast.
No matter how many test cases you run at a time in Rainforest QA, those tests run in parallel. The time to get your test results is only as long as it takes the automation to run your single longest test case. That’s how we return test results back to our customers in less than four minutes, on average.
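The arithmetic behind that claim is easy to demonstrate with a standalone Python sketch. The “tests” here are just timed sleeps, not real Rainforest tests: run in parallel, total wall time tracks the longest individual test rather than the sum of all of them.

```python
import time
from concurrent.futures import ThreadPoolExecutor

# Simulated test durations in seconds (stand-ins for real test cases).
test_durations = [0.1, 0.2, 0.3]

def run_test(duration):
    time.sleep(duration)  # pretend to run a test for `duration` seconds
    return duration

start = time.monotonic()
with ThreadPoolExecutor(max_workers=len(test_durations)) as pool:
    results = list(pool.map(run_test, test_durations))
elapsed = time.monotonic() - start

# Run serially, this would take ~0.6s; in parallel, wall time is bounded
# by the single longest "test" (~0.3s), not the sum.
print(f"sum of durations: {sum(results):.1f}s, wall time: {elapsed:.2f}s")
```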
And there’s no limit to how many automated test cases you can run at a time, because we run all tests on our cloud of virtual machines (VMs), which is, for practical purposes, infinitely scalable. Access to our cloud of VMs is built into the Rainforest QA platform, with support for 40+ browsers and platforms. Everything you need is available in the platform: you don’t have to procure and maintain any testing hardware, and you don’t have to pay for access to a grid of devices through services like SauceLabs or BrowserStack.
Side note: Any batches of tests that you send to our tester community run in parallel, too, so you can get test results from manual tests unusually quickly: in 17 minutes, on average, for Rainforest QA customers.
Tests created in code-based automation solutions like Selenium and Cypress evaluate what’s going on in the code behind the scenes of your product’s UI. That is, these solutions test what the computer sees, not what your users see.
As a result, Selenium and Cypress tests can be “brittle” when it comes to minor, behind-the-scenes code changes that don’t even change the user experience. That is, tests can incorrectly fail, requiring time-consuming test maintenance. For example, if the HTML ID attribute of the signup button in the example above changes, a Selenium or Cypress test that identifies that button by its ID would likely break.
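To make that brittleness concrete, here’s a self-contained Python sketch using only the standard library. It is not real Selenium or Cypress code; the toy locator below just mimics the “find element by ID” lookup those tools typically use, and the element IDs and the rename are hypothetical.

```python
from html.parser import HTMLParser

# A toy "find element by ID" locator, mimicking how code-based tests
# typically target elements (e.g., cy.get('#signup-button') in Cypress).
# Not a real browser driver; just enough to show why ID-based locators break.
class IdFinder(HTMLParser):
    def __init__(self, target_id):
        super().__init__()
        self.target_id = target_id
        self.found = False

    def handle_starttag(self, tag, attrs):
        if dict(attrs).get("id") == self.target_id:
            self.found = True

def element_exists(html, element_id):
    finder = IdFinder(element_id)
    finder.feed(html)
    return finder.found

# Version 1 of the page: a test that looks up "signup-button" passes.
page_v1 = '<button id="signup-button">Sign up</button>'
# Version 2: a refactor renames the ID. The button looks identical to users,
# but the ID-based locator no longer matches, so the test incorrectly fails.
page_v2 = '<button id="btn-signup-v2">Sign up</button>'

assert element_exists(page_v1, "signup-button")
assert not element_exists(page_v2, "signup-button")
```

A check based on what the user actually sees, by contrast, would match the unchanged “Sign up” button in both versions of the page.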
We built Rainforest QA to test software and apps the way the end user does: by interacting with the final UI of the site or app. That means each step of a Rainforest QA automated test is based on identifying or interacting with elements visually. It works by pixel matching and using AI to find and interact with (click, fill, select, etc.) the correct elements. This means Rainforest QA tests don’t break when there are minor, behind-the-scenes code changes that don’t change the UI.
The conventional approaches to QA--owned exclusively by developers and owned exclusively by siloed QA teams--are ultimately bad for software product quality, and therefore bad for the companies and teams that care about quality.
We believe in an approach based on accessible quality, in which incentives are aligned and the right people in the organization--people on product teams who are evaluated based on product outcomes--are empowered to own QA.
That empowerment starts with democratizing access to the power and speed of test automation with no-code solutions like Rainforest QA. (You can sign up for a 14-day free trial.) When anyone can quickly and easily create, update, and run automated tests and triage test results, people with the right product insights can make the best trade-offs between speed of product development and test coverage. Developers can push code as often as they want, and customers enjoy the optimal product experience according to their best advocates within the company.