Signagelive is a global leader in the digital signage industry powering tens of thousands of screens deployed across 46 countries, with a cloud-based platform available in multiple languages.
“Yesterday I reported eleven bugs in four minutes. I managed to double-check that they were resolved in two minutes. That’s impossible with manual testing.”
In early 2016, Signagelive had an informal approach to QA: when developers had time, they performed a few manual tests ahead of each software release and hoped for the best. This let the company focus on growth and building new features, but eventually the number of bugs clients were finding became unacceptable.
As a global leader in the digital signage industry with customers in 46 countries, Signagelive knew they needed to take a more rigorous approach to QA if they were going to hold their place in the industry.
Signagelive’s first step toward building a QA program was to pull Francisco José Arráez Segura (Fran) from the customer support team to head up QA efforts. When Fran first took over QA for Signagelive, he worked with a team of QA testers doing manual software testing. This involved writing instructions for the testers to carry out manually, sending those instructions to the testers, and waiting for them to report back on which tests passed and which failed. It often took a few days to get all the results back for the regression test suite.
After receiving the results, the most time-consuming part of the task began: categorizing the failures.
Ideally, each failure would represent an actual bug, and many did. But not all bugs were equally important, so Fran had to re-run each failing test himself to see what the bug was and decide on a priority level before sending it to developers.
Sometimes, Fran would re-run the tests and find no bugs. He attributed these inconsistencies to human error or misinterpretation of the test instructions. As Signagelive’s test suite grew, these kinds of errors happened more and more, slowing down testing and making Fran question the reliability of the test results.
In an effort to increase their test coverage, Signagelive added two more people to the QA Team. Even with more people, manual testing was too slow to achieve the level of coverage Signagelive wanted, at the speed they needed. New releases were going live before the team had time to complete regression testing, and customers were still finding bugs before the team did.
At the end of 2020, one of Signagelive’s clients requested a new feature called granular user permissions (GUP). GUP is a security feature that allows a network admin to control which users can or can’t see certain features and UI elements (e.g. a button to create a new digital signage playlist). To test this feature, you need to have complete confidence that you’re verifying what the user will actually see.
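The core idea can be illustrated with a simplified sketch (the role names and feature names below are hypothetical, not Signagelive’s actual permissions model): the UI shows or hides an element based on whether the current user’s role grants the corresponding permission.

```python
# Simplified sketch of a granular-user-permissions check.
# Roles and feature names here are invented for illustration.
PERMISSIONS = {
    "network_admin": {"create_playlist", "manage_users", "publish_content"},
    "content_editor": {"create_playlist", "publish_content"},
    "viewer": set(),  # sees content but no management features
}

def can_see(role: str, feature: str) -> bool:
    """Decide whether a UI element (e.g., a button) should be visible."""
    return feature in PERMISSIONS.get(role, set())

print(can_see("network_admin", "manage_users"))  # True
print(can_see("viewer", "create_playlist"))      # False
```

Testing a feature like this means verifying, for every role, that exactly the permitted elements are rendered on screen, which is why confidence in what the user actually sees matters so much.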
GUP wasn’t a good fit for manual testing, because there was too much opportunity for human error. The testers would need to read the instructions very carefully to make sure that the features that were visible matched the admin access they were testing—it wasn’t as simple as testing whether all of the buttons on a page worked or whether you could successfully fill out a form. With manual testing out of the question, Fran looked for an automated testing solution.
Selenium, the most popular test automation framework, wasn’t a good fit for testing GUP, because it tests the underlying code of an application (i.e., the DOM) rather than the visual layer.
Tests created with Selenium use locators (selectors such as element IDs or XPath expressions) to identify elements on a page and check that they appear as intended. However, this isn’t a reliable way to test whether an element is actually visible to the user. A pop-up modal could be sitting on top of the element, so a user can’t see it. Or a bug in the element’s positioning logic could cause it to render somewhere off-screen.
Ultimately, if you’re testing the user interface of an application with a tool like Selenium, you’re testing what the computer thinks the user is seeing, but you’re not actually testing the visual rendering of the app. Almost all codeless test automation tools are built on top of Selenium and rely on these locator IDs.
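The gap between the two approaches can be shown with a simplified sketch (plain Python, not actual Selenium code; the element names and geometry are invented): a locator-style check passes as long as the element exists in the DOM, while a visual-style check also asks whether the element is on screen and uncovered.

```python
# Simplified sketch: why a locator-based check can pass while the
# element is invisible to the user. Names and geometry are invented.
from dataclasses import dataclass

@dataclass
class Element:
    element_id: str
    x: int
    y: int
    width: int
    height: int
    z_index: int = 0

VIEWPORT = (1280, 720)  # visible screen area in pixels

def locator_check(page, element_id):
    """Locator-style check: does the element exist in the DOM at all?"""
    return any(el.element_id == element_id for el in page)

def visually_visible(page, element_id):
    """Visual-style check: is the element on screen and not covered?"""
    target = next(el for el in page if el.element_id == element_id)
    on_screen = 0 <= target.x < VIEWPORT[0] and 0 <= target.y < VIEWPORT[1]
    covered = any(
        other.z_index > target.z_index
        and other.x <= target.x
        and other.y <= target.y
        and other.x + other.width >= target.x + target.width
        and other.y + other.height >= target.y + target.height
        for other in page
        if other is not target
    )
    return on_screen and not covered

page = [
    Element("new-playlist-btn", x=40, y=100, width=160, height=40),
    Element("modal-overlay", x=0, y=0, width=1280, height=720, z_index=10),
]

print(locator_check(page, "new-playlist-btn"))     # True: it's in the DOM
print(visually_visible(page, "new-playlist-btn"))  # False: the modal covers it
```

In this toy page, the locator check reports the button as present even though a full-screen modal hides it, which is exactly the class of failure a DOM-only test cannot catch.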
Rainforest was the only automation service that offered a way of visually identifying elements on the page, and it was a perfect match for Signagelive’s needs.
Rainforest automated testing helped Signagelive get fast and consistent test results across their entire platform. They were able to eliminate human error, run all of their tests in parallel, and quickly categorize test results and identify bugs.
Rainforest QA’s automation tool is designed to let anyone create and maintain automated tests without any programming skills.
Fran and his colleague Lewis were able to get started immediately without hiring a QA engineer or learning a framework like Selenium. When they wanted to speed up test creation to meet an upcoming deadline, they borrowed members of other teams to help. Those team members got up to speed with only a short tutorial.
Fran decided to automate as much of their regression test suite as possible. The team saw benefits almost immediately upon converting manual tests to automated tests. Test runs took much less time to complete, and Fran no longer had to worry about human error. If a test failed, it was either because of a bug, or because of a test step that needed to be updated, which was usually quick and easy.
Before Signagelive started using automated testing, it would take Fran and Lewis two or three weeks to complete a full regression test suite. With Rainforest, running an entire suite of tests only takes as long as the longest test in the suite. The average automated test run on Rainforest takes less than four minutes.
Using automation, Signagelive can now complete their entire regression suite of 500 tests in half a day.
Evaluating automated test results in Rainforest was much faster for Fran and Lewis than evaluating manual results, because it was easier to categorize failures and report bugs to developers.
Thanks to video recordings of every test and other information provided by the platform, the QA team can tell in less than a minute whether the failure was caused by a real bug in the app or a problem with the test. They no longer have to manually re-run the tests to see if the tester’s results were valid.
"Now I’m confident if a test fails on step X, it’s because that feature or that button in the app has failed."
If the test fails because of a problem with the automation, Fran says they can usually find the break and fix it in less than a minute. If there’s an actual bug, it usually takes less than five minutes to identify exactly where the bug is.
Plus, Rainforest integrates seamlessly with Jira, so for any bug they find, Fran or Lewis can automatically create a ticket in Jira with a link to the test in Rainforest. Once the bug has been resolved, it only takes a few minutes to verify the fix.
With automation, they’re now doing more testing in much less time, which frees up time for other aspects of QA. Fran is able to spend more time thinking critically about the overall user experience of the app and designing tests to evaluate how various features interact.
The rest of the QA team can devote more time to hardware testing, which is not yet automated and was often one of the first QA activities to be cut when the team was pressed for time.
Ever since Signagelive started doing automated testing with Rainforest QA, the number of critical bugs that slip into production has decreased dramatically. Along with efforts from other areas of the company, QA has helped the company continue to grow. They’ve been adding employees and customers for six years in a row, and they don’t see it slowing down anytime soon.
“I’m more confident than ever that our platform’s quality is OK.”