Test coverage is a tried-and-true software testing metric for good reason: it distills the reach of software testing into a single, quantifiable figure that’s understandable to QA, developers, product managers, and the C-suite. But the equation for meaningful test coverage is much less straightforward.

How test coverage is measured can vary dramatically depending on the team and/or testing tactic involved. Developers, for example, are more likely to be looking at unit test coverage requirements needed to merge their branch. Product managers, meanwhile, may be more concerned with test coverage across happy paths or new features. Quality assurance teams need to reconcile these competing perspectives and establish test coverage goals that help their companies deliver quality experiences to their customers and accelerate deployment frequency.

Let’s look at the goals, testing tactics, and impact of good test coverage. 

What is Test Coverage… Really?

Ask any quality engineer what their current test coverage is, and they’ll likely share a percentage… or just respond with a long sigh. Whether a team is reaching 35%, 75%, or 95% test coverage, the number is only as meaningful as the data that informs it. Ultimately, the goal is fewer rollbacks or hotfixes, better user experiences, and less time spent on rework, especially late in development cycles when delays are more costly.

To achieve this, test coverage must reflect three critical factors:

  • Critical components and integrations
  • New product features and functionalities
  • High-traffic user journeys

Sound familiar? These priorities reflect the testing pyramid. 

Using the test pyramid as a guide, quality teams can bridge the silos between unit testing in the earliest stages of development and the comprehensive end-to-end testing needed to ensure good user experiences, and set meaningful test coverage targets that span both.

The Importance of Unit Testing in Improving Test Coverage 

Unit testing is vital for improving test coverage and shifting testing to the left. By running unit tests before merging into the main branch, developers can easily find and fix defects before they cause bigger (and more expensive) problems.

Many organizations find that unit testing is a valuable first step toward integrating automated testing into CI/CD pipelines. This often takes the form of developers writing a test that exercises a discrete, well-encapsulated function and asserts on the result it returns. The targeted focus of unit testing means that tests are quick to run and easy to maintain, making them ideal for developers working solo on new projects.
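
To make this concrete, here’s a minimal sketch of what such a unit test might look like with pytest; the discount function and its values are purely illustrative assumptions, not code from any particular product.

```python
# Minimal unit-test sketch (hypothetical function and values).
import pytest


def apply_discount(price: float, discount_pct: float) -> float:
    """Return the price after applying a percentage discount."""
    if not 0 <= discount_pct <= 100:
        raise ValueError("discount_pct must be between 0 and 100")
    return round(price * (1 - discount_pct / 100), 2)


def test_apply_discount_happy_path():
    assert apply_discount(100.0, 20) == 80.0


def test_apply_discount_rejects_invalid_percentage():
    with pytest.raises(ValueError):
        apply_discount(100.0, 150)
```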

Most development teams set test coverage minimums for developers looking to merge a branch, which typically means setting a threshold for passing unit tests. At mabl, for example, developers must have 95% test coverage in order to merge. Though critical for maintaining quality, this requirement has resulted in a lighthearted competition to see who can get closest to the 95% threshold without actually reaching it. The current record stands at an entertaining 94.98%!
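
mabl’s internal tooling isn’t shown here, but as one hedged illustration, a team using pytest-cov could enforce a similar merge gate by failing the build whenever coverage drops below the threshold (the package path below is an assumption):

```
# Hypothetical CI step: fail the run if coverage falls below 95%.
pytest --cov=my_package --cov-fail-under=95
```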

However, the narrow focus of unit testing means that relying on unit test coverage alone isn’t enough to meet quality standards, especially for products serving churn-prone customers. The quest for good test coverage demands a wider look at development and software testing practices.

Bridging the Gap with API Testing

Over 90% of developers use APIs, and two-thirds say their APIs generate revenue. For many businesses, especially ecommerce or enterprise software companies, APIs are essential for completing vital actions like checking out or importing data. Good test coverage, therefore, must consider API and integration test coverage. 

Quality teams and developers both have a role in ensuring that test coverage remains high across all APIs. Developers typically run API tests focused on internal APIs, along with contract tests for external APIs; this work is closer to unit testing and happens earlier in the software development life cycle.

But those tests don’t necessarily reflect how customers are interacting with APIs across the product. Quality teams can bridge this gap by expanding test coverage for external APIs, either as part of end-to-end tests or independent API tests. When integrated into end-to-end UI testing, API testing can help shorten test execution time and the effort needed to investigate test failures, as well as improve test coverage in ways that reflect the full user experience.  
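
As a sketch of what an independent API test might look like, the example below posts to a hypothetical orders endpoint and asserts on the response; the URL, payload fields, and token are assumptions, not a real API.

```python
# Independent API test sketch (hypothetical endpoint, fields, and token).
import requests

BASE_URL = "https://api.example.com"


def test_create_order_returns_201_and_order_id():
    response = requests.post(
        f"{BASE_URL}/v1/orders",
        json={"sku": "ABC-123", "quantity": 2},
        headers={"Authorization": "Bearer <test-token>"},
        timeout=10,
    )
    assert response.status_code == 201
    body = response.json()
    assert "order_id" in body
    assert body["quantity"] == 2
```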

Building an Automated End-to-End Testing Strategy for High (Value) Test Coverage

An important outcome of good test coverage is ensuring a positive user experience as the customer sees it, which means delivering quality across the entire customer journey. While unit test coverage is a straightforward calculation, this side of test coverage is more subjective and constantly evolving.

Connecting test coverage to the full user experience is where QA expertise truly shines. Combining empathy and critical thinking with data allows quality leaders to build comprehensive end-to-end tests that capture real user journeys.

Comprehensive end-to-end tests that cover complex customer journeys with functional and non-functional testing have become significantly more reliable with the growth of AI in test automation. These automated tests, performed late in development cycles, aim to cover as much of the user experience as possible, including automated accessibility checks, emails and PDFs, shadow DOM components, cross-browser testing, and real-world scenarios.
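
mabl itself is low-code, so a journey like this would typically be recorded rather than scripted. Purely as an illustration of the kind of flow such a test covers, here is a Playwright-style sketch (using the pytest-playwright page fixture) of a hypothetical guest checkout; the URL and selectors are invented.

```python
# End-to-end journey sketch (hypothetical site, selectors, and flow).
from playwright.sync_api import Page, expect


def test_guest_checkout_happy_path(page: Page):
    page.goto("https://shop.example.com")
    page.click("text=Add to cart")
    page.click("text=Checkout")
    page.fill("#email", "qa@example.com")
    page.click("text=Place order")
    expect(page.locator(".order-confirmation")).to_be_visible()
```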

QA teams can continuously evolve their E2E testing strategy by turning to new data sources across the enterprise. Customer data platforms, which saw a surge in popularity post-2020, offer valuable insights into real customer usage across a company’s application or website. This allows QA professionals to improve test coverage across the most critical user journeys, ensuring that test coverage remains accurate and meaningful. 

Leveraging AI and Machine Learning to Improve Test Coverage

Different AI and machine learning techniques can be used to help QA teams identify coverage gaps and reduce test maintenance to deliver good test coverage as deployment frequency increases. 

Automated testing platforms that use AI and machine learning effectively can also help QA uncover test coverage gaps. Visual change detection, for example, can detect unexpected UI changes that would otherwise hurt test coverage. Unsupervised machine learning techniques like clustering can also be used to identify gaps in test coverage by grouping URLs to show end-to-end web app coverage and suggesting how to prioritize adding tests.
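
As a rough illustration of the clustering idea (not any vendor’s implementation), the sketch below groups visited URL paths with k-means and reports how many URLs in each cluster have corresponding tests; the URLs, cluster count, and “tested” set are all hypothetical.

```python
# Coverage-gap clustering sketch (hypothetical URLs and test inventory).
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

visited_urls = [
    "/checkout/cart", "/checkout/payment", "/checkout/confirm",
    "/account/settings", "/account/billing",
    "/search?q=shoes", "/search?q=hats",
]
tested_urls = {"/checkout/cart", "/checkout/payment"}

# Character n-grams work reasonably well for short, path-like strings.
vectors = TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)).fit_transform(visited_urls)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(vectors)

for cluster in sorted(set(labels)):
    urls = [u for u, lbl in zip(visited_urls, labels) if lbl == cluster]
    covered = sum(u in tested_urls for u in urls)
    print(f"cluster {cluster}: {covered}/{len(urls)} URLs have tests -> {urls}")
```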

AI and machine learning have reduced the amount of time and effort needed to maintain automated tests, making it possible for quality teams to manage comprehensive end-to-end tests. Using unique identifying elements across an application’s UI, including shadow DOM components, AI makes these historically flaky and high-maintenance tests a routine part of a company’s automated testing practice. These comprehensive tests improve test coverage by giving QA in-depth insight into the full user journey. 

Combining Manual Testing and Automated Testing for High Test Coverage 

Automated testing reduces routine testing tasks so that QA teams have more time to improve test coverage across new features and edge cases with manual testing. 

Exploratory testing gives quality professionals the opportunity to test edge cases and unusual user scenarios, ensuring that customers always have the best experience possible. With their unique combination of UX knowledge and product expertise, quality engineers can constantly push the boundaries of their software testing strategy to deliver relevant, valuable test coverage. 

Similarly, manual regression testing ensures that any fixes or updates don’t break the existing user experience. Though regression testing is ideally automated for faster delivery cycles, manual regression testing can help improve test coverage across new functionalities or unexpected user journeys that only become apparent when an experienced software tester is interacting with the product. 

Combining automated testing with manual testing empowers QA teams to make the most of their time and talents to deliver good test coverage, meaning that common user journeys and new features receive sufficient testing.

The Impact of Good Test Coverage

Test coverage distills several complex and constantly changing processes into a single number, pushing QA teams to balance competing priorities as they define “good” test coverage. In order to deliver excellent user experiences and shift testing to the left, QA teams need to consider unit test coverage at the earliest stages of development and the ultimate impact on customers.

Armed with the right data, effective automated testing tools, AI, and a fine-tuned manual testing strategy, QA can establish a definition of good test coverage that sets the foundation for a culture of quality.

Start your journey to better test coverage with mabl's 14-day free trial.