Our goal has always been to fix QA. With today’s release, we’re closer than anyone else to doing it. Tests in Rainforest now fix themselves, creating more reliable results while allowing your team to focus on what matters — shipping code.

Everyone has to do QA, but everyone hates doing QA. That’s why we started Rainforest in 2012. We make tools that make QA suck less. But until today, the core suck has remained: the time-consuming pain of fixing tests that break as your software changes, and of figuring out whether a failing test is a real bug or just the test being dumb.

Unlike some others in our industry, Rainforest self-healing is actually self-healing: when a test breaks, Rainforest AI tries to fix it by recreating the relevant parts of the test. The whole process is generative AI, end to end. You tell the AI what to test, and it creates the test in real time in front of you. If the test looks good, you put it into your suite and go merrily on with your life.

When a test breaks because your software has inevitably changed, the AI fixes it and gets it back to a passing state. We’ll notify you, but the test won’t block your release, and you can get back to building.

If a legitimate, flow-breaking bug happens (about 10% of test failures), the test will fail, we’ll block your release, and we’ll ping your developers.
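To make that flow concrete, here’s a minimal sketch of the triage logic described above. It’s illustrative only: the names, types, and functions are assumptions made for the example, not Rainforest’s actual API.

```typescript
// Hypothetical sketch of the self-heal-or-block flow described above.
// Names and types are illustrative, not Rainforest's actual API.

type RunOutcome =
  | { status: "passed" }
  | { status: "healed"; note: string }     // the AI recreated the broken steps; the release proceeds
  | { status: "blocked"; reason: string }; // a real, flow-breaking bug; the release is blocked

interface TestRun {
  attempt(): Promise<boolean>;                            // run the test as written
  healAndRetry(): Promise<{ ok: boolean; note: string }>; // let the AI recreate the broken parts and re-run
}

async function triage(run: TestRun): Promise<RunOutcome> {
  if (await run.attempt()) {
    return { status: "passed" };
  }

  // The test broke, most likely because the app changed: try to self-heal first.
  const healed = await run.healAndRetry();
  if (healed.ok) {
    // Notify the team, but don't block the release.
    return { status: "healed", note: healed.note };
  }

  // Healing couldn't get the test back to passing: treat it as a real bug,
  // block the release, and ping the developers.
  return { status: "blocked", reason: "flow-breaking failure needs a human" };
}
```

The point is the ordering: try to heal first, and only block the release when healing can’t produce a passing test.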

Demo of self-healing with generative AI in Rainforest

There are a few things that make this hard to do.

The first is the infrastructure to support it. You want testing that replicates your actual user experience: not headless browsers, not artificial test environments that don’t match production. We solve this by running a real browser in a full OS inside a VM that is shared across your test creation, editing, and running.

The second is that AI models aren’t optimized for QA. We solve this by doing Results Augmented Generation on our model with over 9 million sets of test instructions and their associated execution data, tailoring responses to the QA use case. We’re in a unique position to do this because we’ve run the largest QA-specific crowdsourcing business for about a decade. We know what “click the signup button” means, statistically.
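For a rough sense of what that data could look like, here’s a hypothetical shape for one instruction/execution pair. The field names and values are made up for illustration; they are not Rainforest’s actual schema.

```typescript
// Illustrative only: a guess at the shape of one of the ~9 million
// instruction/execution pairs. Fields and values are assumptions.

interface QaExample {
  instruction: string;            // what a test author asked for
  action: string;                 // what was actually done during execution
  result: "passed" | "failed";
  observation: string;            // what was on screen afterwards
}

const clickSignup: QaExample = {
  instruction: "Click the signup button",
  action: "clicked the button labeled 'Sign up' in the page header",
  result: "passed",
  observation: "signup form with email and password fields is visible",
};
```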

The third, and probably the most important, is striking the right balance between user control and AI magic. Too much AI magic and your AI creates all kinds of low-signal tests that your devs end up ignoring. Too much user control and it feels slow and painful. We already have a successful no-code automated testing tool. Users find creating tests easy and intuitive, and they like the level of control it gives them. They just hate fixing broken tests.

So now broken tests fix themselves.

Where’s this going next? We’re starting with “blocks” of generative AI within tests. Next come entire tests generated by AI, and from there, entire test suites generated by AI.

The whole team is really excited about this. It’s taken 12 years, but we think we’re finally here: QA that doesn’t suck.

Get in touch and we’ll show you what these new features look like in action for your app.