We’ll be rolling out some new features at Rainforest that focus on how Rainforest testers interact with our platform. Today, let’s look under the hood at these upcoming tester features and explore how they’ll help our customers get better test results.
The first big change will be a major overhaul of our tester training program. We’ve invested a ton of time analyzing different test scenarios and behaviors, identifying where testers were most likely to get stuck. From that exercise, we learned a lot about what our users and our testers need to know in order to execute tests successfully. Based on this insight, we’re developing a completely new tester education strategy to round out our testers’ expertise.
We’ll also be incorporating new technologies into our training programs to continuously hone our testers’ skills. The more they test, the better they get. Practice test scenarios leave testers better prepared to navigate complex workflows in Rainforest tests.
To complement the training itself, the overhaul will also improve the way we harness test data: by identifying problematic scenarios and assigning targeted training tasks to testers, we’ll help them breeze through these challenges.
Reinforcing the training, we’re updating our tester rules to reflect this new focus on tester performance. These updated tester rules should lead to less confusion over test execution, better preparedness for tricky test cases, and faster, more reliable test results for our users.
To learn more details about the tester rules, talk to your CSM!
Testers will also be able to leave suggestions for minor improvements on each step of a test case. Most users know that previously, testers could only leave comments on the test as a whole, which was useful but less immediately actionable.
With the new tester comments model, users will be able to receive more granular, specific feedback on an individual step in their tests. For example, if a step is confusing or could cause misunderstandings, testers will be able to point users directly to the step that is causing the most confusion. This will help users refactor their tests to get the most reliable results.
The change will also enable testers to alert users to any observations they make beyond what’s explicitly asked in the test itself. This subjective feedback gives users extra information about app health and gives testers a way to surface additional information without influencing the deterministic “pass” or “fail” condition of the test results.
These new step-specific tester comments will be shown in the Detailed Results modal of the Rainforest results.
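To make the idea concrete, here’s a rough sketch of how step-level comments could be modeled and grouped for display. The names, fields, and structure below are purely illustrative assumptions on our part, not Rainforest’s actual API or data model:

```python
# Hypothetical model of step-level tester comments.
# All names and fields here are illustrative assumptions.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class StepComment:
    step_number: int               # which step of the test the feedback targets
    tester_id: str
    body: str
    is_observation: bool = False   # an extra observation vs. a suggested fix

def group_by_step(comments):
    """Group comments by step so results can show feedback inline, per step."""
    grouped = defaultdict(list)
    for c in comments:
        grouped[c.step_number].append(c)
    return dict(grouped)

comments = [
    StepComment(3, "tester_a", "Step wording is ambiguous about which button to click."),
    StepComment(3, "tester_b", "Page loaded slowly here.", is_observation=True),
    StepComment(5, "tester_a", "Expected result should mention the confirmation toast."),
]

by_step = group_by_step(comments)
```

Grouping by step number is what makes the feedback “immediately actionable”: each piece of feedback lands next to the exact step it concerns instead of in a single comment thread for the whole test.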
Almost the reverse of step-level tester comments, users will be able to rate performance and provide feedback on an individual tester at the individual step level. This will allow users to report directly on what they like and dislike about the way a test was executed. These ratings will contribute to the tester’s overall quality rating as one of several factors we use to evaluate tester performance and reliability.
By adding these ratings at the step level of tester activity, we’ll gain a better understanding of where testers are struggling, enabling targeted tester training and performance-improvement techniques in the future.
Tester star ratings will be shown in the Detailed Results modal of the Rainforest platform.
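As a rough illustration of “one of several factors,” per-step star ratings could be blended with other performance signals into a single quality score. The weighting and formula below are our own assumptions for the sake of the sketch, not Rainforest’s actual rating calculation:

```python
# Illustrative sketch only: how per-step star ratings (1-5) might feed a
# tester's overall quality score. The weights are assumed, not Rainforest's.
def quality_score(step_ratings, other_factors, rating_weight=0.6):
    """Blend the average step rating with other performance factors.

    step_ratings:  list of 1-5 star ratings left by users on individual steps
    other_factors: a 0-1 score from other signals (speed, accuracy, etc.)
    """
    if not step_ratings:
        return other_factors  # no ratings yet: fall back to the other signals
    avg_stars = sum(step_ratings) / len(step_ratings)
    normalized = (avg_stars - 1) / 4  # map 1-5 stars onto a 0-1 scale
    return rating_weight * normalized + (1 - rating_weight) * other_factors

score = quality_score([5, 4, 5], other_factors=0.8)
```

The point of blending rather than using stars alone is resilience: a tester with few ratings isn’t judged solely on a small, noisy sample of step-level feedback.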
If you have questions about our upcoming new features, or want to learn more about the Rainforest testers, please let us know!