If you're looking to scale up your manual software testing without hiring a whole team of in-house testers, there are several outsourced software testing services that use crowd testers to provide affordable results.
Many of these providers look similar on the surface—most offer exploratory testing, some version of scripted testing, and claim to integrate into your team's workflow. But there are four key factors that determine which solution is best for you.
In this post, we’ll evaluate four of the most popular crowdtesting services based on these criteria, starting with our own solution, Rainforest QA.
Want to scale up your manual testing without adding headcount? Sign up for Rainforest QA for 24/7, on-demand access to manual testers from our worldwide community of QA specialists. You can try our manual testing service free for 14 days; after that, it’s only $25/hour.
Unlike traditional outsourced testing services that handle most aspects of your quality assurance (QA) through an account manager, Rainforest QA offers a unique blend of software and service by providing thorough test coverage with the speed of automation and the intelligence and judgment of humans.
This means your team can take full ownership of your QA strategy and handle more manual testing without adding headcount. Here’s what that looks like in practice.
Most testing experts agree you need a QA strategy and test plan in place before writing any tests. This is to help ensure you don’t end up wasting time on things like testing edge-case workflows that just aren’t that important to the business.
How quickly you can get started with your first run of manual tests will largely depend on who writes your QA strategy, test plan, and the actual tests.
A lot of manual testing services will take you through an onboarding process where they get to know your team and quality assurance goals, and then they design your QA strategy and test plan for you. By outsourcing your QA strategy and test plan, you’re trusting that someone outside your team will be able to understand your goals and process well enough to provide valuable testing results.
We’ve found that the process moves much more quickly and provides far more valuable results if you design your QA strategy and test plan internally. (We go into more detail about how to do that in this article on QA test strategy and this article on how to build a test plan.)
Of course, if you’ve never developed a QA strategy before, it pays to have expert guidance. For our customers who unlock premium Professional features, our customer success team offers 1:1 help to provide best practices. Our Enterprise customers get a dedicated QA consultant who works closely with them to develop a QA strategy and implementation plan.
In Rainforest, you won’t have to wait to start improving your test coverage because of a long onboarding process. Once you sign up, it only takes a couple of minutes to write your first test and begin a test run.
In Rainforest QA, you can write test scripts using the Plain-Text Editor or our Visual Editor, depending on your preference.
The Plain-Text Editor allows you to write out test instructions in plain English. Each step has two parts: a tester instruction and a tester confirmation.
The tester confirmation should always consist of a simple yes or no question to validate the element being tested. We also provide guidance and tips for writing tester instructions. All of this is to help mitigate any miscommunications about what should be tested and the desired results.
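For illustration, a single step in the Plain-Text Editor might look something like the following (a hypothetical example—the exact wording is up to you, as long as the confirmation is a yes-or-no question):

```text
Instruction: Click the "Sign Up" button on the homepage.
Confirmation: Did a registration form appear with fields for email and password?
```

Pairing each instruction with a yes-or-no confirmation like this leaves the tester no room to guess what a passing result looks like.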
The Visual Editor lets you write no-code test scripts in the same way you would for Rainforest test automation. This consists of choosing an action (e.g., click, type, wait, observe, or fill), taking a screenshot of the element you want to apply the action to, and/or choosing how to apply the action (e.g., ‘fill’ a field with a ‘random email’, or ‘wait’ for ‘3 seconds’).
Note: Check out this 4-minute video for a detailed walkthrough of how to write a test in Rainforest QA.
Any test scripts written in the Visual Editor can be automatically converted into instructions for manual testers to follow.
Once you’ve written your test script, you can send it to the Rainforest community of testers with the click of a button in the Rainforest platform or via the API or CLI. Tests that are sent to the tester community at the same time will be run simultaneously.
Any test written with the Visual Editor can be run using automation or human testers. This is useful for features that are still in development and are experiencing frequent changes but will eventually be a good fit for automated testing.
Once the application is stable and ready for test automation, you can simply update your existing tests, as needed. Updating tests written in the Visual Editor will generally be faster than re-writing all of your tests from scratch.
Whether you use the Plain-Text Editor or the Visual Editor, no matter how many tests you submit to our crowd of QA specialists at a time, all of your tests will get executed simultaneously (i.e., “in parallel”). That allows us to return manual test results in an average of just 17 minutes after submission.
While many manual testing services claim you get results within minutes, this usually means within minutes after the tester begins the test—not within minutes of you submitting the test.
At Rainforest QA, our specialists are available 24/7, even on holidays, so you get the fastest results possible right when you need them.
Most manual app testing services offer ‘in the wild’ testing, meaning the manual testers are using their own devices to execute tests. This may sound useful at first—‘real users with real devices’—but this approach can affect the reliability of your software testing.
Functional software testing is like a bacteria culture test in a laboratory. Your sample has to be placed in a perfectly clean environment for you to be confident that any bacteria that grows was present in the sample and wasn’t introduced by outside factors. To prove those results through repetition, the same contamination-free environment has to be used every time.
In the same way, software testing needs to be done in the same clean environment every time to get consistent, reliable results and to make it easier to reproduce bugs—which is nearly impossible using ‘in the wild’ testing.
All Rainforest QA testers use our network of cloud-based virtual machines to execute tests. This means every test is run in the same environment, free from any unpredictable outside factors, such as pop-up blockers, browser security settings, or outdated operating systems. Through our virtual machines, all of our testers have access to the major browsers (including the latest and older versions of Safari, Edge, Chrome, Firefox, and Internet Explorer) and the latest and older operating systems including Windows, macOS, iOS (on iPhone and iPad) and Android (on phones and tablets).
Note: Rainforest QA does not support hardware testing (GPS, recording and playing back audio and video, etc.). However, our customers have found they can easily cover these instances in-house because Rainforest QA makes all other types of testing much faster and more reliable.
While most manual testing services offer some form of scripted testing (such as regression testing) and unscripted testing (also called exploratory testing), the reliability of these tests will vary depending on whether the same set of testers are used each time.
To give you the best results, Rainforest QA assigns different testers for every set of scripted tests, but you get a dedicated group of four testers for all your exploratory testing. (Note that exploratory testing is only available on our Enterprise plan.)
Getting a new set of testers for each scripted test run helps avoid what we call ‘nose blindness.’ Nose blindness in software testing is when a tester has gone through the same rote, repetitive test steps so many times that they expect to see particular patterns, which causes them to miss small (but significant) changes in the content or behavior of the app.
If a tester is seeing the test for the first time with fresh eyes, then they’ll be more likely to notice even small discrepancies in the software.
On the other hand, exploratory testing is better suited for testers who are already familiar with your application. Because the goal of exploratory testing is to uncover bugs found along atypical user paths, the testers first need an understanding of what typical user paths look like. This takes time and familiarity with the application, which will be difficult to achieve if you get different testers for every exploratory test run.
Here are a few other ways Rainforest QA provides consistent, reliable results for all crowdsourced manual testing:
Unlike most crowdtesting services, which require a contract or charge a subscription fee, with Rainforest’s crowdsourced testing you only pay for what you use. There are no long-term contracts: at a rate of $25/hour (after a 14-day free trial), you can run as many or as few tests as you need.
Applause offers a fully managed manual testing service that focuses heavily on exploratory testing. The company also offers scripted testing for any digital experience, including desktop and mobile applications. They provide an onboarding experience to integrate a team of Applause testers into your workflow before any testing starts.
The team is headed by a project manager who is responsible for managing your testing process. Although you have the option to author tests yourself, all test cycles have to be activated by the project manager. Then, results get returned in 1-3 days.
All testing is done ‘in the wild’, so you’ll only have access to results from the personal devices and browsers the testers are using. Testers are paid per bug, which means they’ll be incentivized to find more bugs. That can be great for exploratory testing, but for scripted testing they may be tempted to report irrelevant bugs (and reviews mention this can be an issue).
In addition to having to sort through lots of potentially irrelevant bugs, you’ll also have a harder time recreating each bug because you may not have access to the exact device and configuration it was found on. Knowing a bug exists is only helpful if you can recreate it because it’s nearly impossible to fix the bug otherwise.
Although there is no pricing information on the Applause website, reviews suggest that you have to sign a contract that is based on a set amount of testing hours per quarter.
Testlio offers the option for a managed or co-managed manual software testing service for web and mobile apps. If you choose the co-managed service, you’ll be given the option to create and run some of your own tests. Either way, a few people from Testlio sit in on your team meetings until they understand your process enough to create and run tests for you.
Once onboarding is complete and you’re ready for testing, an average testing session for Testlio is 1-4 hours. They also advertise ‘overnight’ functional testing. Additionally, reviews suggest that you may need to submit your testing suite one or two days in advance if you want to run a full suite of tests (such as your regression suite).
Like Applause, Testlio only offers ‘in the wild’ testing. Instead of paying testers per bug, Testlio pays testers by the hour. They give you different testers for every test run. This helps prevent any ‘nose blindness’ in regression testing, but reviews suggest that exploratory testing can be less effective because the testers are unfamiliar with the intricacies of your application.
Testlio offers no pricing information on their website.
Test IO advertises their ability to make use of non-business hours for testing by offering overnight and weekend testing. They start with a less-than-24-hour onboarding process to help you learn the platform and design your test cases, but the Test IO staff quickly move to a hands-off approach. For an extra fee, they will help you with long-term test management.
They offer three pricing packages: Starter, Professional, and Elite. The Starter plan doesn’t specify how quickly you will get results, but the Professional plan promises results within four hours and the Elite plan offers results within two hours.
Different testers will be assigned to your tests for every run, unless you opt for the Elite plan. In that case, you’ll be able to mark specific testers as ‘favorites’ and send the tests directly to them.
All testing is executed ‘in the wild’ and testers are paid per bug. To encourage testers to report only relevant bugs, Test IO tracks how many bugs testers report compared to how many your team accepts as valid; over time, their goal is to decrease the number of bugs your team rejects. If you choose to add on the managed service, the Team Lead assigned to you will preview all reported bugs and reject any they deem irrelevant on your behalf.
Although Test IO defines the features included with each pricing package, they offer no other information on their pricing structure.
With Rainforest QA, QA teams can scale up their manual testing for higher app quality without adding headcount. Our on-demand testing teams provide the fastest test results of any of the services available today—17 minutes on average. And when you’re ready for automation, it’s easy to convert your crowd test scripts into automated test scripts.
It’s a scalable, all-in-one test automation solution that’s appropriate for small teams just getting started with automated testing or QA-mature teams regularly running 500+ software tests.