
When we first published this report last year, AI in software testing was still an experiment for most teams. Since then, well… everything has changed.
In 2025, AI is no longer a promise of what's to come; it's an everyday reality. And while not every team has figured out how to make AI work for them, we're now seeing clear momentum toward outcomes that once felt out of reach, especially for smaller teams:
- Faster release cycles
- Fewer testing bottlenecks
- Less manual maintenance
That’s why we’re re-releasing our 2024 research — with fresh commentary and data that reflect how far the industry has come in just twelve months. We hope this helps teams plan for QA in 2026 and beyond.
Adoption of AI testing tools is high, but value took a while to catch up
Our survey of more than 600 software developers across the U.S., Canada, the U.K., and Australia found that three-quarters of teams using traditional, code-based test automation frameworks had adopted AI testing tools to assist with test writing and maintenance.
That’s a remarkable rate of adoption — but at the time, the results told a complicated story:
Teams using AI weren’t yet saving time on those tedious tasks. In some cases, they were spending slightly more time maintaining test suites than teams that hadn’t yet brought AI into the mix.
“To be able to maintain automated tests, especially with a small dev team, just takes time.” – Software Engineering Lead
“In the past, we gave up on testing the front end because it was too difficult to maintain and the tests were very often broken.” – Engineering Manager
For many teams, early versions of AI testing tools added an unnecessary layer of complexity. The technology was promising — but still too immature to offset the pain of brittle test suites and manual updates.
Why AI-powered test automation fell flat — and what’s changed since then
At the time, many AI features for QA were surface-level: autocomplete tools, code generators, or point solutions layered onto existing testing frameworks. Those tools didn't fundamentally change how tests were created or maintained, so they couldn't fully deliver on AI's productivity promise.
But the landscape has evolved.
Over the past year, AI in software testing has shifted from code assistance to intelligent automation — tools that understand your app’s structure, can crawl it autonomously, and update tests as the UI evolves.
The result: the same teams that once saw flat returns from AI are now beginning to see real efficiency gains when they adopt platforms purpose-built for this next generation of testing.
We’re betting you’re seeing this shift too… and not just in the world of QA. For example, Anthropic recently posted about AI’s maturation in cybersecurity. Everywhere we look, AI is rapidly moving from “interesting in theory” to practical, powerful real-world applications.
Want to see for yourself how far AI in QA has come?
The signal for smaller, faster teams
One of the clearest findings from our original dataset was this: Small teams using AI were more likely to keep their automated tests up to date.
We can see now that this was a hint of what was coming.
AI wasn’t eliminating work yet — but it was starting to make modern testing possible for leaner teams without the headcount or bandwidth to maintain sprawling frameworks.
Fast forward to 2025, and QA test automation trends have accelerated.
With more powerful, purpose-built AI testing tools, even startups with small dev teams can maintain full coverage without the heavy lift of traditional test writing and maintenance. Most startups no longer need a highly technical QE team: no-code, natural language test writing powered by AI has changed the game. And that's led to a major shift in who gets to have reliable, automated QA. (Hint: anyone who wants it!)
AI in software testing: From promise to practice
So, was AI underdelivering last year? Not exactly, if you ask us.
AI in software testing was still finding its footing, and teams were still figuring out how to integrate it effectively. (Just as teams across so many other functions have been doing.) Our 2024 data on AI for QA captured that growing pain.
Today, we’re seeing those early adopters’ patience pay off.
Teams adopting fully AI-driven SaaS testing platforms are gaining the benefits everyone hoped for when surveyed last year:
- Faster test creation and maintenance cycles
- Fewer bottlenecks between dev and QA
- Increased confidence in release velocity
The takeaway: AI-driven QA is here to stay
AI isn't replacing testers or developers; it's reducing friction. And for many scrappy teams (think: pre-QA hires), it enables QA at a level that was previously unattainable. It lets people get back to more interesting, strategic work while AI takes care of tedious, manual QA tasks.
The story this data tells isn’t one of hype versus reality; it’s one of evolution. The pain points highlighted in 2024 were real — but they also pushed the industry toward smarter solutions that deploy AI where it actually matters. AI in software testing is one of those places.
In 2025, those solutions are here.
And teams embracing them are finally seeing what AI can really do for QA:
✅ Map out key coverage areas (no more guesswork)
✅ Ensure reliable test coverage
✅ Free up developer time
✅ Help small teams ship with the confidence of much larger ones
Read the full report: The State of Software Test Automation in the Age of AI
Re-released with new insights on how teams are turning early experimentation into real-world results and how AI in software testing is evolving in real time.
