Manually testing a website can be as simple as opening a web page or application in a staging environment, manually clicking through a list of test cases, and marking whether they passed or failed on a spreadsheet. And in fact, that's where most software teams start when they first invest in QA.
If you're a QA team of one working on a relatively simple application, it may be feasible to manage your testing with spreadsheets alone, but as soon as you start needing to test more than 50 or so test cases, you’ll probably need to bring in other team members to get testing done fast enough. And that’s when spreadsheets start to break down—often because of small miscommunications and inconsistencies between testers.
In this manual testing guide, we'll share best practices that can help make manual testing more efficient, scalable, and less prone to human error, even if you’re still managing your testing program with a spreadsheet. We'll also share how our testing platform, Rainforest QA, can help you scale up your manual testing without adding headcount—and help you transition to automated testing when you're ready.
Five Best Practices for Manual Testing
Scale Up Manual Testing with Rainforest QA
Want to expand your manual testing without adding headcount? Talk to us about our Premium plan.
The first two best practices in this list are useful for software testing in general regardless of how you perform testing (i.e., manually or with automation) or the type of testing you’re performing (e.g., functional testing vs. usability testing). The last three best practices are testing techniques geared toward manual functional testing (although each one has elements that can be applied to automated testing as well).
A testing strategy will help you define your goals, decide what to include in your testing efforts, assign responsibilities, and plan for the time and resources you’ll need long-term.
We’ve written an in-depth, practical guide to automated testing strategy. Although that article was written for teams using automated testing, many of the same concepts still apply to manual testing. Two other considerations not covered by that article are:
As your software product grows and matures, you’ll need to add several more tests to your testing suite to cover new features. This means you’ll eventually need more people to help cover manual testing. However, it can be costly to hire more internal team members, so many teams end up outsourcing some of their manual testing. You can learn more about manual software testing services here, or skip ahead to see how Rainforest QA provides the fastest manual testing available via our worldwide community of QA specialists.
Outsourcing some of your manual testing is a great way to keep up with manual testing in the early days of building your software. But, once you have a few stable builds covered by test cases that get run frequently, making the switch to automated testing can save you a lot of time and resources.
With most automated software testing tools (e.g., Selenium), you’ll have to rewrite all of your test scripts in whatever programming language the tool requires. This is typically very time-consuming and costly. It also means that only team members with programming experience will be able to manage the automated tests.
Rainforest QA takes a new approach to software testing that makes it possible to write a test script for manual testing that can later be used for automated testing with minimal updates. And, you don’t need programming skills to write, edit, and otherwise manage the automated testing.
While a testing strategy takes a high-level approach that helps you direct your testing efforts overall, a test plan digs into how individual tests will be performed. This includes defining individual test cases, determining success criteria, preparing test data, choosing which types of testing to use, and more. We’ve written a detailed tutorial on how to write a test plan, but one of the most important things to consider is which tests will eventually need to be automated.
Not all test cases or types of testing are suited for automation. For example, it’s better to use manual testing for user paths that change frequently, require subjective human judgment, or run too rarely to justify the upfront cost of automating them.
Some of the types of testing that are better suited for manual testing include exploratory testing, usability testing, and ad hoc testing.
On the other hand, most test cases and types of testing can and should be automated because automated tests are much cheaper and quicker to run. Any stable user path that doesn’t fall under the categories mentioned above would ideally be automated. Regression testing and smoke testing are typical examples of testing that gets automated.
Some types of testing, such as exploratory testing, rely heavily on the tester’s ability to be creative in finding new user paths. Other testing types, such as regression testing, need to be run exactly the same way every time.
With manual testing, there is typically a lot of room for test steps to be interpreted in different ways. For example, let’s say you’re testing the Airbnb web app and have a test step that says “Search for stays in Paris.” These instructions sound clear enough at first, but they could be executed in a few different ways. For example:
1. Typing “Paris” into the search bar and pressing Enter
2. Typing “Paris, France” into the search bar and clicking “Search”
3. Selecting “Paris, France” from the autocomplete dropdown
Each one of these examples could return a different result.
Instead, we recommend using specific and direct instructions for each test step. For the example above, a clearer test step would be: “Type ‘Paris, France’ into the search bar and click ‘Search.’”
This leaves less room for variation in the execution of the test step.
We also recommend avoiding the use of any internal company jargon and technical phrases. For example, instead of writing “Verify the sign-in modal appeared,” you could write, “Did a popup appear prompting you to sign in?”
The idea here is to make your test scripts easy enough for anyone to follow, and to be fairly certain that each person will complete each test step in the exact same way. This makes it easier for anyone to help with testing and will provide more reliable results.
If you have too many instructions in one test step, it can be difficult to pinpoint which part of the test step actually failed. For example, if a failed test step includes instructions to “Navigate to the login screen, enter the test user credentials, and log in,” you won’t know whether the tester couldn’t navigate to the login screen, couldn’t type in the text boxes, or ran into any number of other issues.
That’s why we recommend limiting each test step to just one action. We also recommend breaking each step down into two parts: an action for the tester to perform, and a question verifying the result of that action.
Going back to our example of searching for a stay in Paris on the Airbnb website, the test steps would be:
1. Action: Type “Paris, France” into the search bar. Question: Does the search bar display “Paris, France”?
2. Action: Click “Search.” Question: Did a results page appear showing stays in Paris?
If either one of those steps failed, you would have a much better idea of where to look for the bug.
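If you manage your test cases in a spreadsheet or a repository, this one-action-per-step structure is easy to make explicit. Here’s a minimal, purely illustrative Python sketch (not a Rainforest feature; all names are made up) that encodes each step as an action plus a verification question:

```python
# Illustrative only: one way to encode "one action, one verification" test steps.
# This mirrors the recommendation above; it is not a Rainforest API.

from dataclasses import dataclass

@dataclass
class TestStep:
    action: str    # exactly one instruction for the tester
    question: str  # a yes/no check of that action's result

airbnb_search_test = [
    TestStep(
        action="Type 'Paris, France' into the search bar.",
        question="Does the search bar display 'Paris, France'?",
    ),
    TestStep(
        action="Click 'Search'.",
        question="Did a results page appear showing stays in Paris?",
    ),
]

# If a run fails, the failing step's index points straight at the broken action.
for i, step in enumerate(airbnb_search_test, start=1):
    print(f"Step {i}: {step.action} -> {step.question}")
```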
Another best practice with both manual and automated testing is to keep tests short and focused.
When you’re working on a team of two or more people, a few long tests will take longer to complete than the same coverage split across dozens of shorter tests that can run in parallel.
For example, let’s say you have five features that you want to check on, and it would take about 10 minutes worth of clicking around the site to verify everything is working. You could logically put together a test that explored all five features one after another, but there are two problems with this:
First, if the test fails, it’s not immediately obvious to a person looking at the test result which feature is responsible for the failure. And second, it slows down testing when you’re working with a team. If you put all of the test steps into one test, it would take one person 10 minutes to complete it. If you separate it into five two-minute tests, two testers could complete the testing in about six minutes (one tester runs three tests while the other runs two).
In general, we’ve found that if you go over 20 unique steps (not including reusable steps like those that log in to your application), you’re probably trying to test too many things with one test. It will likely save you time in the long run to separate it out into multiple shorter tests.
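To make the arithmetic concrete, here’s a small Python sketch (using the hypothetical durations from the example above) that computes the wall-clock time of a test run when tests are divided among testers:

```python
# Hypothetical durations: one 10-minute test vs. five 2-minute tests,
# assigned greedily (longest test first, to the least-busy tester).

def wall_clock_minutes(test_durations, num_testers):
    loads = [0] * num_testers
    for duration in sorted(test_durations, reverse=True):
        loads[loads.index(min(loads))] += duration
    return max(loads)

print(wall_clock_minutes([10], 2))             # 10: one tester does everything
print(wall_clock_minutes([2, 2, 2, 2, 2], 2))  # 6: three tests + two tests in parallel
```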
When you get to a point where you need to run more tests than your development team can handle without hiring more people, Rainforest’s crowd-testing service can help you expand your test coverage. Working with Rainforest can also mitigate some of the biggest challenges of manual testing with spreadsheets, such as having to manually recreate test failures to understand what went wrong.
Rainforest QA provides the fastest manual testing in the industry: results come back in less than 20 minutes, on average, and you’re only charged for the time the crowd-testers spend running your tests, not the time you spend using the app to write test steps or evaluate results.
Rainforest QA offers two different test editors, which can both be used to create manual test scripts. The Visual Editor lets you choose from a library of standard actions and take screenshots of elements to write test scripts. These test scripts can be used for manual testing or automated testing. The Plain-Text Editor allows you to write test steps in plain English that can only be used for manual testing.
Note: With either option, you’ll have access to the Rainforest library of built-in, randomized data for login credentials, payment details, and contact information. This lets you run test cases with unique data without having to create all of the test data yourself.
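If you’re managing manual tests on your own, you can approximate this idea with generated data. The sketch below is illustrative only (it isn’t how Rainforest implements its data library, and the field names are made up):

```python
# Illustrative sketch: generate unique, throwaway test data per run so that
# signups, orders, etc. never collide between test runs.

import secrets
import string

def random_test_user(domain: str = "example.com") -> dict:
    tag = secrets.token_hex(4)  # unique suffix per run
    password = "".join(
        secrets.choice(string.ascii_letters + string.digits) for _ in range(16)
    )
    return {
        "email": f"qa+{tag}@{domain}",
        "password": password,
        "full_name": f"Test User {tag}",
    }

print(random_test_user())
```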
To write or edit a test step using the Visual Editor, choose from the dropdown menu of actions and take a screenshot of the element you want to apply the action to. This screenshot is used in automated tests to find and interact with elements on the user interface and for visual validation of elements during test runs.
You can also add additional step details such as the exact text that should be entered into a text box.
Whenever you want to run a test created with the Visual Editor, you can use the Rainforest tester community for manual testing or choose our Automation Service for automated user interface testing.
When you send a test to the Rainforest tester community, each test is translated into simple text to ensure our experts will be able to execute each step accurately.
If you decide to automate your tests, you’ll get five hours of free testing every month. Then, it’s only $5/hour for the duration of the test run.
This two-minute video demonstrates how to write a simple test using the Rainforest Visual Editor.
The Plain-Text Editor lets you write test steps in free-form English and offers features to help mitigate miscommunication.
For example, each test step is automatically set up with two parts: an action and a verification.
As we covered earlier, this setup helps clarify exactly what the tester should do and the information they should report on.
Two other ways to help ensure each test is executed as intended are to:
1. Use the click-to-copy feature for exact inputs that need to be entered
2. Embed images and text files to clarify elements or features that are difficult to describe
This video covers how to write a test using the Rainforest Plain-Text Editor. And, you can learn more best practices for writing test scripts using the Plain-Text Editor in this article.
Rainforest’s team of testing experts is distributed across the globe, which allows us to offer testing 24/7, every day of the year.
You can kick off a manual (or automated) test run from within the Rainforest platform or from your CI/CD pipeline using our CLI, API, CircleCI Orb, and GitHub Actions integrations. Tests are run simultaneously, which allows you to receive manual test results in less than 20 minutes, on average.
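As a rough sketch of what a scripted kickoff could look like, the Python example below POSTs to the Rainforest API to start every test tagged “smoke.” The endpoint path, payload fields, and auth header are assumptions, so verify them against the current Rainforest API documentation before relying on anything like this:

```python
# Sketch: trigger a Rainforest run from a CI job. The endpoint, payload,
# and auth header below are assumptions; confirm them in the Rainforest docs.

import os
import requests

API_URL = "https://app.rainforestqa.com/api/1/runs"  # assumed endpoint

def trigger_run(tags: list[str]) -> dict:
    response = requests.post(
        API_URL,
        headers={"CLIENT_TOKEN": os.environ["RAINFOREST_API_TOKEN"]},  # assumed header
        json={"tags": tags},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    run = trigger_run(["smoke"])
    print(f"Started run {run.get('id')}")
```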
Each test is run by at least two QA specialists. If these two testers don’t agree on the result of the test, the test will be sent to more testers until an agreement is reached. Additionally, Rainforest QA specialists are trained to follow instructions closely and reach out if there’s any confusion. This ensures that they’re testing what you intended them to test and makes results more reliable.
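The escalation logic is simple to picture in code. The sketch below only illustrates the policy described above (it is not Rainforest’s implementation): keep collecting verdicts until one outcome has a strict majority.

```python
# Illustration of the escalation policy: start with two tester verdicts and
# keep adding testers until one outcome holds a strict majority.

from collections import Counter
from typing import Iterator

def consensus(verdicts: Iterator[str], initial_testers: int = 2) -> str:
    panel = [next(verdicts) for _ in range(initial_testers)]
    while True:
        outcome, top = Counter(panel).most_common(1)[0]
        if top > len(panel) - top:  # strict majority: agreement reached
            return outcome
        panel.append(next(verdicts))  # disagreement: escalate to another tester

print(consensus(iter(["pass", "pass"])))          # pass (both testers agree)
print(consensus(iter(["pass", "fail", "fail"])))  # fail (third tester settles it)
```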
This allows your team to focus on other tasks, so you’ll never have to delay starting a test run, and you can take advantage of evenings and weekends for testing.
Manual testing is often done on the tester’s personal device, meaning each tester may be working with a different device, browser, operating system, and browser settings (e.g., ad blockers vs. no ad blockers). While real users will also be using different devices, browsers, and operating systems, taking this approach with your testing has one major downside: different devices and configurations introduce the possibility of inconsistent results.
Each variation in device, browser, or operating system can affect how the application responds, which means a test may pass on one device but fail on another device.
While it may be tempting to try to fix every error for every operating system, web browser configuration, and device, this isn’t feasible for most teams. Instead, you’ll want to focus on ensuring your application works under normal circumstances (i.e., the most frequently used devices, operating systems, setups, etc.).
In order to get consistent, reliable results every time, tests need to be run in the same environment, free from any unpredictable outside factors, such as ad blockers, browser security settings, or outdated operating systems.
Instead of using physical devices, Rainforest testers use virtual machines. This means each test run starts with a clean, consistent testing environment. Via the virtual machines, all Rainforest testers have access to the major browsers, including the latest and older versions of Safari, Edge, Chrome, Firefox, and Internet Explorer. They also have access to the latest and older operating systems for various desktop and mobile devices, including Windows, macOS, iOS (on iPhone and iPad), and Android (on phones and tablets).
One of the biggest challenges with manual web application testing is being able to reproduce bugs in order to find and fix them.
Rainforest QA helps solve this problem by recording every test run, regardless of whether it passes or fails. If it takes three testers to arrive at a passing result, you’ll receive all three recordings. Test results also include HTTP logs, browser logs, mouse activity, time spent on each step, and more. This means you’ll be able to see exactly how each test step played out without having to re-create the exact situation.
Our testers will leave explanations for how a result was determined and any additional notes to help explain how a test played out. Additionally, all Rainforest testers will evaluate the software from the end-user’s perspective and note any unexpected or undesirable aspects of the application they encounter. These might not be bugs, per se, but could be things affecting the user experience that you’ll want to address. However, they will only mark a test as ‘failed’ if they were unable to complete or verify a step included in the test.
Once you’ve reviewed the test results and determined the cause of the failure, you can easily categorize the failures with custom labels. This is helpful for prioritizing your efforts when you’re short on time.
This video provides a more detailed example of how to triage test results from the Rainforest tester community.
With Rainforest QA, software teams can quickly scale up their manual testing without adding headcount or creating a bottleneck in the software development life cycle. Our on-demand testing teams provide the fastest test results of any service available today, typically returning results less than 20 minutes after a test is submitted. And our reliability systems make sure you can trust your results.
Talk to us about a Premium plan featuring manual testing services.