In early 2016, Signagelive—a digital signage company—had an informal approach to QA. When developers had time, they performed a few manual tests ahead of each software release and hoped for the best. This allowed the company to focus on growth and building new features, but the company reached a point where the number of bugs clients found was unacceptable.
As a global leader in the digital signage industry with customers in 46 countries, Signagelive knew they needed to take a more rigorous approach to QA if they were going to hold their place in the industry.
Signagelive’s first step toward building a QA program was to pull Francisco José Arráez Segura from the customer support team to head up QA efforts. For a few years, Fran worked with a team of human testers, manually testing as much of the app as they could. Although some QA was better than no QA, manual testing proved too slow to achieve the coverage they wanted and to keep up with new software releases.
In 2021, Signagelive switched to no-code automated testing with Rainforest QA. Within four months, they had dramatically improved their QA program. Their results included:
As we share their story, we’ll talk about:
Sign up for Rainforest QA to save time and money on no-code test automation—you can run up to five hours of no-code automated tests for free, every month. It’s only $5/hour after that.
When Fran first took over QA for Signagelive, he worked with a team of QA testers doing manual software testing. This involved writing instructions for the testers to carry out manually, sending those instructions to testers, and waiting for them to report back on which tests passed and which tests failed. It often took a few days to get all the results back for their regression test suite.
After receiving the results, the most time-consuming part of the task began: categorizing the failures.
Ideally, every failure would represent an actual bug, and many did. But not all bugs were equally critical, so Fran had to re-run each test to see what the bug was and decide on a priority level before sending it to developers.
Sometimes, Fran would re-run the tests and find no bugs. He attributed these inconsistencies to human error or misinterpretation of the test instructions. As Signagelive’s test suite grew, these kinds of errors happened more and more, slowing down testing and making Fran question the reliability of the test results.
In an effort to increase their test coverage, Signagelive added two more people to the QA Team. Even with more people, manual testing was too slow to achieve the level of coverage Signagelive wanted, at the speed they needed. New releases were going live before the team had time to complete regression testing, and customers were still finding bugs before the team did.
At the end of 2020, one of Signagelive’s clients requested a new feature called granular user permissions (GUP). GUP is a security feature that allows a network admin to control which users can or can’t see certain features and UI elements (e.g. a button to create a new digital signage playlist). To test this feature, you need to have complete confidence that you’re verifying what the user will actually see.
GUP wasn’t a good fit for manual testing, because there was too much opportunity for human error. The testers would need to read the instructions very carefully to make sure that the features that were visible matched the admin access they were testing—it wasn’t as simple as testing whether all of the buttons on a page worked or whether you could successfully fill out a form. With manual testing out of the question, Fran looked for an automated testing solution.
Selenium, the most popular test automation framework, wasn’t a good fit for testing GUP because it tests the underlying code of an application (i.e., the DOM) rather than the visual layer.
Tests created with Selenium use locator IDs, or snippets of code, to identify elements on a page and test whether they appear as intended. To test a feature like the GUP, you might include a step to verify that an element’s visibility is set to “visible” for users who are supposed to have access, and “not visible” for other users.
However, this isn’t a reliable way to test whether the element is actually visible to the user. There could be a pop-up modal on top of the element, meaning a user can’t see it. Or there could be a bug in the element’s positioning logic that causes it to render somewhere off-screen.
Additionally, if a locator ID changed but the test wasn’t updated to reflect that change, the test could fail even though the feature was working properly.
Ultimately, if you’re testing the user interface of an application with a tool like Selenium, you’re testing what the computer thinks the user is seeing, but you’re not actually testing the visual rendering of the app. Almost all codeless test automation tools are built on top of Selenium and rely on these locator IDs.
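The gap between what the DOM reports and what the user actually sees can be illustrated with a small sketch. This is a simplified, hypothetical element model (not Selenium’s API): a locator-based check inspects a CSS property, while a user-facing check also has to account for positioning bugs and overlapping modals.

```python
from dataclasses import dataclass

VIEWPORT_W, VIEWPORT_H = 1280, 800  # assumed browser viewport size

@dataclass
class Element:
    css_visibility: str        # what a locator-based check inspects
    x: int                     # rendered top-left position
    y: int
    covered_by_modal: bool = False

def dom_check(el: Element) -> bool:
    """What a locator-based test asserts: the CSS property alone."""
    return el.css_visibility == "visible"

def user_can_see(el: Element) -> bool:
    """What a visual check verifies: on-screen and not obscured."""
    on_screen = 0 <= el.x < VIEWPORT_W and 0 <= el.y < VIEWPORT_H
    return dom_check(el) and on_screen and not el.covered_by_modal

# Positioning bug: the element renders off-screen.
off_screen = Element(css_visibility="visible", x=-9999, y=40)

# A pop-up modal overlays the element.
behind_modal = Element(css_visibility="visible", x=100, y=200,
                       covered_by_modal=True)

for el in (off_screen, behind_modal):
    # In both cases the DOM check passes, but the user sees nothing.
    print(dom_check(el), user_can_see(el))
```

In both failure modes, a test asserting only on `css_visibility` would pass while the feature is effectively broken for the user.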
Rainforest was the only automation service that offered an alternative way of identifying elements on the page, and it was a perfect match for Signagelive’s needs.
Rainforest creates tests using pixel-matching and text-matching—rather than code-based locators—to find and test elements in the UI of an application. This means you’re testing what the user will see rather than what the computer thinks is happening.
To create a test in Rainforest, you select a preset action (like “click”, “select”, or “type”) and then click-and-drag to take a screenshot of the element you want to apply the action to.
Instead of a code-based locator, Rainforest uses this small screenshot to find the element during the test. If those pixels exist anywhere on the page, the test will find them and confirm that the element is visible.
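The idea of locating an element by its pixels can be sketched as a toy template search: scan the screenshot for a region that exactly matches the saved snippet. This is only an illustration of the concept, not how Rainforest actually implements matching (real matchers tolerate rendering noise and run far faster).

```python
def find_template(screen, template):
    """Return (row, col) of the first exact pixel match of `template`
    inside `screen`, or None. Both are 2D lists of pixel values."""
    sh, sw = len(screen), len(screen[0])
    th, tw = len(template), len(template[0])
    for r in range(sh - th + 1):
        for c in range(sw - tw + 1):
            if all(screen[r + i][c + j] == template[i][j]
                   for i in range(th) for j in range(tw)):
                return (r, c)
    return None

# A tiny "screenshot" and the saved snippet of the target element.
screen = [
    [0, 0, 0, 0, 0],
    [0, 1, 2, 0, 0],
    [0, 3, 4, 0, 0],
    [0, 0, 0, 0, 0],
]
template = [[1, 2],
            [3, 4]]

print(find_template(screen, template))   # found: the element is visible
print(find_template(screen, [[9]]))      # not found: the element isn't on screen
```

Because the search runs over the rendered pixels, a match implies the element is genuinely on screen, regardless of what the DOM claims.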
“Rainforest’s no-code automation language is perfect for testing the GUP,” Fran said.
Rainforest QA’s automation tool is designed to let anyone create and maintain automated tests without any programming skills.
Fran and Lewis were able to get started immediately without hiring a QA engineer or learning a framework like Selenium. When they wanted to speed up test creation to meet an upcoming deadline, they were able to borrow members of other teams to help them. Those team members were able to get up to speed with only a short tutorial.
The QA team saw benefits almost immediately upon converting manual tests to automated tests. Test runs took much less time to complete, and Fran no longer had to worry about human error. If a test failed, it was either because of a bug, or because of a test step that needed to be updated, which was usually quick and easy.
“Now I’m confident if a test fails on step X, it’s because that feature or that button in the app has failed,” Fran said.
Fran decided to automate as much of their regression test suite as possible—with the exception of a few test cases that weren’t a good fit for automation. These included tests that benefit from human judgment, tests involving CAPTCHAs, and tests that need to run on actual hardware devices instead of virtual machines.
With Rainforest, they were able to dramatically improve test coverage, creating more than 500 tests in four months. Fran notes that they probably could have created those tests even faster, perhaps in about two-and-a-half months, if the developers hadn’t been improving some major features in the app during that time.
Before Signagelive started using automated testing, it would take Fran and Lewis two or three weeks to complete a full regression test suite.
With Rainforest, running an entire suite of tests only takes as long as the longest test in the suite. The average automated test run on Rainforest takes less than four minutes.
Using automation, Signagelive can now complete their entire regression suite in half a day.
Evaluating test results with Rainforest’s automation was much faster for Fran and Lewis than evaluating manual test results because it was easier to categorize failures and report bugs to developers.
Thanks to video recordings of every test and other information provided by the platform, the QA team can tell in less than a minute whether the failure was caused by a real bug in the app or a problem with the test. They no longer have to manually re-run the tests to see if the tester’s results were valid.
If the test fails because of a problem with the automation, Fran says they can usually find the break and fix it in less than a minute. If there’s an actual bug, it usually takes less than five minutes to identify exactly where the bug is.
Tests that fail will have a red ‘X’ to the left of the test name and passed tests will have a green check mark.
To view the cause of the failure, you click on the test and the test step that failed will be highlighted in red (as seen below). When available, the platform also intelligently provides suggestions for fixing broken tests.
Each test result includes a video of the test, whether it passes or fails, which helps QA teams quickly identify the root cause of test failures.
Plus, Rainforest integrates seamlessly with Jira, so for any bug they find, Fran or Lewis can automatically create a ticket in Jira with a link to the test in Rainforest. Once the bug has been resolved, it only takes a few minutes to verify the fix.
“Yesterday I reported eleven bugs in four minutes. I managed to double check that they were resolved in two minutes. That’s impossible with manual testing,” Fran said.
Spending less time running tests and reviewing results freed up the QA team’s time to expand their test coverage. Fran now estimates that 90% of their app is covered with tests that run the exact same way each time, giving them complete confidence in the test results—and more importantly, the quality of the app itself.
Automated testing helped Signagelive get fast and consistent test results across their entire platform. They were able to eliminate human error, run all of their tests in parallel, and quickly categorize test results and identify bugs.
With automation, they’re now doing more testing in much less time, which frees up time for other aspects of QA. Fran is able to spend more time thinking critically about the overall user experience of the app and designing tests to evaluate how various features interact.
The rest of the QA team can devote more time to hardware testing, which isn’t yet automated and was often one of the first QA activities to be cut when the team was pressed for time.
Ever since Signagelive started doing automated testing with Rainforest QA, the number of critical bugs that go through to production has decreased dramatically. Along with efforts from other areas of the company, QA has helped the company continue to grow. They’ve been adding employees and customers for six years in a row, and they don’t see it slowing down anytime soon.
“I’m more confident than ever that our platform’s quality is OK,” Fran said.
If manual software testing is slowing down your release pipeline, it could be time to start automating testing. With Rainforest QA, you can build an automated test suite without learning a programming language or hiring QA engineers. Anyone on your team can get started creating automated tests and begin improving quality.
It’s a fast and scalable all-in-one solution that’s appropriate for teams that are just dipping their toes into automated testing, as well as QA-mature teams running 500+ software tests on a regular basis.
Start automating testing with Rainforest QA’s free plan.