The mabl blog: Testing in DevOps

Introducing Active Coverage: Quality That Keeps Pace with Agentic Development

Written by Fernando Mattos | Apr 23, 2026 1:00:05 PM

Today, mabl launches Active Coverage, and with it, the most significant evolution of our platform since we started building it in 2017.

Active Coverage is coverage that builds itself, runs itself, and fixes itself. It’s mabl's answer to the question every engineering and quality leader is now asking: how do you keep your test suite working when the speed of development just crossed a threshold it is not coming back from?

The Gap That Opened Up

If your team has adopted AI coding tools in the last year, you already know what we're describing. Code ships faster, PRs open continuously, and somewhere along the way, your test automation started falling behind. This is a structural mismatch. Test automation was built for a world where humans wrote code at human speed, and the testing infrastructure most teams rely on today was never designed for the pace at which AI coding agents operate.

We know this from the inside. mabl adopted agentic development internally. We're building mabl using the same tools and workflows as our customers, and we hit the exact consequences they're now facing. PR output tripled, suites struggled to keep up, and failures compounded faster than anyone could manually triage. Our CTO wrote about what that experience looked like in practice. Active Coverage is what we built to solve it.

What Active Coverage means in practice

Active Coverage is what you get when mabl's skills work together continuously. Test authoring, failure analysis, test recovery, and test execution run as a single loop without human handoffs between them. The AI owns the operational work while teams stay in control of what matters: what to protect, how the application should behave, and what a real failure looks like.

Most platforms claiming "agentic testing" today are asking teams to manage multiple agents and orchestrate the handoffs themselves, or handing test creation to the same coding agent that wrote the code being tested. Neither approach is testing infrastructure, and neither scales with agentic development.

Here is what each capability does and what changes when teams use it.

Agent Instructions

Every capability mabl ships is only as good as the context it operates with. A platform that does not understand your application (what matters to users, how edge cases should be handled, what conventions your team follows, etc.) produces tests you cannot fully trust.

Agent Instructions lets teams configure their quality standards once and have mabl apply them automatically across every test it authors, every failure it analyzes, and every recovery it attempts. Two layers work together: Application Summaries, where mabl automatically builds context from real test activity, and Agent Instructions, where teams explicitly encode how to test the application. This is the quality-specific context a coding agent does not have. A coding agent operates in the context of the code it just wrote, while mabl operates in the context of what your users expect.
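The two layers can be thought of as a simple precedence rule: the automatically built Application Summary provides learned defaults, and explicit Agent Instructions override them. The sketch below illustrates that idea only; the function and field names are hypothetical, not mabl's API.

```python
# Illustrative sketch of layered context. All names are hypothetical,
# not mabl's actual API or data model.
def build_agent_context(app_summary: dict, agent_instructions: dict) -> dict:
    """Merge the two layers: the Application Summary (learned from real
    test activity) provides defaults; explicit Agent Instructions win."""
    context = dict(app_summary)          # layer 1: learned defaults
    context.update(agent_instructions)   # layer 2: explicit team standards
    return context

# Hypothetical example values for illustration
summary = {"checkout_flow": "3-step wizard", "error_handling": "toast messages"}
instructions = {"error_handling": "assert inline validation, not toasts"}
ctx = build_agent_context(summary, instructions)
```

After the merge, `ctx["error_handling"]` reflects the team's explicit instruction rather than the learned default, which is the behavior the paragraph above describes: explicit standards take precedence over inferred context.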

Cloud Test Generation

Test creation has always been a bottleneck: one machine, one session, someone waiting. That model breaks when AI coding agents are shipping code continuously and coverage needs to keep pace.

With Cloud Test Generation, tests are authored entirely in the cloud with no desktop app or local setup required. Sessions can be triggered from a browser, the CLI, an IDE via MCP, or directly from Jira via the Atlassian Rovo integration. Anyone on the team can kick off a session from wherever they work and come back to completed tests. There is no practical limit on how many sessions a team can run; mabl builds in batches and queues the rest automatically. A full session log shows every decision mabl made during creation, so teams understand what was built and why.
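The batching behavior described above can be sketched as a simple queue: run up to N sessions at a time and hold the rest until a slot frees up. This is an illustrative model only; the batch size and scheduling details are assumptions, not mabl's documented limits.

```python
from collections import deque

def run_sessions(requests: list, batch_size: int = 5) -> list:
    """Split queued generation requests into sequential batches.
    Hypothetical sketch: batch_size is an assumed number, and real
    scheduling would run batches concurrently rather than return them."""
    queue = deque(requests)
    batches = []
    while queue:
        batch = [queue.popleft() for _ in range(min(batch_size, len(queue)))]
        batches.append(batch)
    return batches
```

For example, twelve queued requests with a batch size of five would be processed as three batches of five, five, and two.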

Test creation quality is something the mabl team has invested in continuously. Earlier this year, the team rebuilt the test creation agent from the ground up — our lead engineer wrote about that process here. The April release builds on that foundation: the agent now replays tests after generation to verify they pass before saving and self-corrects along the way. Like any AI system, the agent gets stronger with more context — Agent Instructions, Application Summaries, and existing test assets all contribute — but the gap between a fresh setup and production-ready tests is smaller than it has ever been.

Runtime Recovery

Not every test failure is a real failure. Unexpected modals, stale filters, unresponsive buttons: these environmental conditions stop tests and produce noise that someone has to investigate before finding out what actually broke and why.

Runtime Recovery handles these obstacles automatically during execution. If mabl is verifying that something should not happen, it does not attempt recovery. The intent of the test governs the response. Every action is logged so teams stay in control of what changed. The default mode is zero risk: mabl attempts recovery and logs everything, but the test is still marked failed until the team decides to trust it further. When teams are ready for autonomous recovery, that option is configurable.
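The recovery policy described above reduces to a small decision rule: never recover a negative assertion, always log, and only let a recovered step pass once autonomous recovery is enabled. The sketch below is a hypothetical model of that rule, not mabl's implementation; all names are illustrative.

```python
from dataclasses import dataclass, field

@dataclass
class StepResult:
    passed: bool
    is_negative_assertion: bool  # step verifies something should NOT happen
    log: list = field(default_factory=list)

def apply_recovery_policy(step: StepResult, autonomous: bool = False) -> str:
    """Hypothetical sketch of the recovery decision rule:
    - negative assertions are never recovered (the failure IS the signal)
    - by default, recovery is attempted and logged but the step still fails
    - only with autonomous recovery enabled does a recovered step pass"""
    if step.passed:
        return "passed"
    if step.is_negative_assertion:
        step.log.append("no recovery: test intent is a negative assertion")
        return "failed"
    step.log.append("recovery attempted and logged")
    if autonomous:
        return "passed (recovered)"
    return "failed (recovery logged, pending team trust)"
```

The key design point the paragraph makes is the default: recovery is attempted and fully logged either way, but the test outcome stays conservative until the team opts in to trusting it.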

Conversational Results Analysis

mabl has always triaged failures automatically, surfacing root cause analysis and recommendations for every run. That baseline is table stakes at this point. What changes now is what happens after the initial analysis.

The results experience is now built around conversation. When a failure occurs, teams can ask follow-up questions about individual tests or entire test suites, and mabl investigates iteratively, forming hypotheses and fetching evidence as the conversation develops. Questions like "What broke in this deployment that worked in the last one?" or "Is this flaky or a real regression?" get answered directly, with supporting charts and screenshots included in the analysis. The full report is exportable and easy to share, so the right people have context when release decisions get made.
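The iterative investigation pattern described above (propose a hypothesis, fetch evidence, refine) can be sketched as a small loop. This is a conceptual model with hypothetical names, not mabl's implementation.

```python
# Illustrative hypothesis-and-evidence loop; not mabl's implementation.
def investigate(question, propose, fetch_evidence, max_rounds=3):
    """Iteratively refine an answer: propose a hypothesis for the
    question, fetch evidence, and stop once evidence supports one."""
    findings = []
    for _ in range(max_rounds):
        hypothesis = propose(question, findings)
        evidence = fetch_evidence(hypothesis)
        findings.append((hypothesis, evidence))
        if evidence:  # supported: answer with this hypothesis
            return hypothesis, findings
    return None, findings
```

Each round feeds the accumulated findings back into the next hypothesis, which is what lets the conversation narrow from "something failed" to a specific, evidence-backed cause.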

Atlassian Rovo Integration

Most quality workflows still require leaving the tools where work happens, finding information somewhere else, and bringing it back manually. For release decisions, especially, that friction adds up.

mabl's testing intelligence is now available directly inside Jira and Confluence through Atlassian Rovo. Teams can trigger test runs, investigate failures, check coverage, and assess release readiness without leaving Atlassian. The full workflow, from triggering a run off a ticket to getting root cause in context to having results posted back automatically, stays inside Jira end to end.

Built on eight years of AI-native development

These five capabilities are part of a platform that has been accumulating AI-native investments since 2017, and they ship alongside more than 20 additional capabilities mabl has released since January. The difference between mabl and platforms that have recently added AI layers shows up in how deeply the AI is integrated into every decision our platform makes. Element interaction, assertion logic, failure triage, recovery behavior — these are not summaries layered on top of a test runner. They are the test runner.

Other platforms in this space are adding AI management layers on top of existing automation infrastructure and asking teams to orchestrate the results. mabl's job is to handle that work so teams don’t have to.

Active Coverage is live today

Everything shipping today is built for teams at every stage of the agentic development journey, whether you are managing a growing test suite, dealing with maintenance overhead that has gotten out of hand, or running a workflow where PRs merge faster than any human team can manually verify. Engineering leaders get a verification layer that scales with their pipeline automatically. Quality teams get to spend less time on infrastructure and more time on the work that actually requires their judgment.

If you are already on mabl, log in and explore what is new across the platform. If you are evaluating your options, start a free trial or talk to our team.