Crowdsourcing tasks is an alluring prospect. You can use the internet to connect to a large pool of individuals and recruit them to fulfill your tasks quickly and efficiently. This is the dream, but reality isn’t such smooth sailing.
Rainforest QA has learned many hard-won lessons about how best to use crowdsourcing while building its QA platform, and we’d like to share some of the highs and lows we discovered.
One of the key benefits of crowdsourcing is having a large number of workers available to carry out tasks on a piecework basis. This means the crowd can usually start work on your tasks almost immediately and complete a large number of them quickly, providing a great way to handle bursts of activity without having to employ people full time to deal with the peak load. As a result, you can reduce the need to hire a team that either sits idle when it isn’t needed, or is pulled off other activities when you have a spike of work.
Because crowdsourcing platforms generally source workers from all over the world, they also provide good 24x7 coverage. You can provide services even outside your normal local working hours, an increasingly important requirement as more and more organizations serve national and international customers online, who may be many time zones away from you.
In short, the crowd is on-demand, scalable, and always-on. As a result, it can be easy to assume that leveraging crowdsourced work is no different than adding a new software tool to your workflow. But while there are clear advantages to using crowdsourced work, it isn’t a perfect solution, and there are significant problems that you have to take account of before using crowdsourcing.
In our experience, the greatest problem with using the crowd is getting correct results for your task. Simple tasks that can be completed quickly are likely to be completed correctly. But more complicated tasks, which require greater reading comprehension, more analytical skill, and more effort (in short, tasks that are harder) carry a greater risk of being done wrong.
At Rainforest, we tackle this by standardizing how our customers’ QA tests are presented to the workers. This reduces the cognitive load workers are under when they are testing sites and lets them concentrate on the tests. We also train them with frequent refresher courses on the behavior we expect. Making communication between customers and workers clearer, and workers’ responses more consistent and reliable, ensures better quality results as both sides learn to speak the same language.
Even well-trained workers are only human and can make mistakes. This is problematic with crowdsourced tasks, because generally the primary reason for using the crowd is that the task you want done is hard to automate but easy for a human to complete. This means it’s difficult to tell the difference between right and wrong answers automatically. So how do we give our customers quality test results? We give each test to several workers, and use algorithms that have been validated over hundreds of thousands of tests to determine whether the overall result is correct or not. These algorithms and the data they rely on are constantly refined and analyzed by our data science team to improve results quality.
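The core idea of giving each test to several workers and combining their answers can be sketched as a simple majority vote with an agreement threshold. To be clear, this is an illustrative toy, not Rainforest’s production algorithm; the `consensus` function, the `min_agreement` threshold, and the `"needs_review"` outcome are all assumptions made for the example.

```python
from collections import Counter

def consensus(results, min_agreement=0.75):
    """Combine redundant worker answers for one test into a verdict.

    results: list of bool, one per independent worker
             (True = the worker says the test passed).
    min_agreement: fraction of workers that must agree before we
                   trust the majority answer; below it, the test
                   is flagged for human review.

    Illustrative only -- not Rainforest's actual algorithm.
    """
    if not results:
        return "needs_review"
    votes = Counter(results)
    majority_answer, count = votes.most_common(1)[0]
    if count / len(results) >= min_agreement:
        return "pass" if majority_answer else "fail"
    return "needs_review"
```

For example, three of four workers agreeing yields a confident verdict (`consensus([True, True, True, False])` returns `"pass"`), while an even split (`consensus([True, False])`) falls below the threshold and is flagged for review. Real systems would weight votes by worker reliability rather than treating every answer equally.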
Another source of errors in results is workers who are engaged in fraud and have no intention of completing your tasks in good faith. Again, we use the large data set of workers completing tasks in our environment to determine which workers are honest and which are attempting to defraud us.
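One common way to use that kind of task history (and again, this is a hypothetical heuristic for illustration, not Rainforest’s actual fraud model) is to track how often each worker’s answer matches the final consensus, and flag workers whose agreement rate falls well below what honest effort would produce. The function names and the `threshold` value below are assumptions.

```python
def reliability(history, window=100):
    """Estimate a worker's trustworthiness from recent task history.

    history: list of bool, True when the worker's answer matched the
             final consensus verdict for that task.
    window: only the most recent tasks count, so scores can recover.

    Returns a 0.0-1.0 agreement rate, or None with no history.
    Hypothetical heuristic, not Rainforest's production model.
    """
    recent = history[-window:]
    if not recent:
        return None
    return sum(recent) / len(recent)

def is_suspect(history, threshold=0.6):
    """Flag workers whose agreement rate is suspiciously low."""
    score = reliability(history)
    return score is not None and score < threshold
```

A worker who matched consensus on 8 of their last 10 tasks scores 0.8 and passes; one who disagreed on all 10 scores 0.0 and would be flagged for review. In practice you would also account for task difficulty, since honest workers disagree more often on hard tasks.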
Finally, it’s important to remember that although workers from the crowd can appear robotic and dehumanized by the internet, they are people with desires and needs. Most are genuinely trying to deliver the best work they can. That’s why we spend a significant amount of time listening to our workers, as well as our customers, to discover what we can do to help them succeed.
At Rainforest, we continually refine how we use crowdsourced work in order to get the best results possible, from using machine learning to rapidly validate test results to creating a tester community to help support our crowdworkers. All of these efforts come together when we deliver fast, reliable QA results for our customers’ sites.