Though test automation has made software testing faster and more efficient, quality teams still struggle to prioritize end-to-end testing in a way that maximizes test coverage. Unlike unit test coverage, end-to-end coverage is hard to define, particularly when teams have only a small set of test requirements outlined. The issue becomes even more complex as new application features are added, since there’s no easy way to determine where more end-to-end tests are necessary. The result is often a testing strategy that grows less efficient as the product evolves, slowing the development of new features and creating opportunities for competitors to win customers with a more capable user experience.
My team at mabl is no stranger to this challenge. So we developed our page coverage feature, which uses machine learning to group similar URLs, show end-to-end web app coverage, and suggest where to prioritize new tests.
The Problem with Prioritization
Software testing is a persistent challenge when accelerating production cycles: 43% of developers say testing is their biggest pain point and 36% say testing and QA hold back production schedules. But despite these difficulties, a thorough testing strategy is essential for ensuring customer satisfaction and adopting DevOps. Test prioritization based on real customer data helps quality engineering teams expedite software testing while also ensuring a quality customer experience.
Where (and How) mabl Integrates Automated Testing into Development
To best prioritize each type of test, it’s important to consider when your team tests in the development cycle and how often commits are happening. As an Agile team that emphasizes testing early and often in the development cycle, the mabl team has a high cadence of daily and weekly commits. This level of productivity is made possible by our testing strategy, which emphasizes shift-left testing so bugs are caught early in development. When defects are spotted early, they’re faster and easier to fix, making it possible for our entire team to be more productive.
Testing is integrated with development from the onset. Our standard practice is to create a local version of the application, which allows the mabl team to run unit tests and integration tests, as well as automated end-to-end tests with mabl against the local version of the app. Even at this very early stage, we’re making sure that we’re not breaking anything unexpectedly.
Once this initial round of testing has assured our team that the new feature is working as expected, we’ll create a pull request to gather input from other team members. This triggers another run of our unit tests, integration tests, and end-to-end tests. At this stage we’ll also set up an ephemeral environment that allows anyone on the team to perform exploratory testing.
When we're confident that the feature is working as expected, we’ll deploy it to our development environment, which also means merging to our main branch. Doing so means the mabl team will again run the full set of unit and integration tests, as well as a complete set of smoke and regression tests. Every time we add a new feature, we add new tests to those suites to make sure the feature is covered at this stage of the process. Our team will also run a set of end-to-end tests to check the performance of any third-party services. Since this is the final stage before sending code to production, we want to be sure that we're testing very thoroughly.
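To make the pipeline above concrete, here’s a minimal sketch in Python of which test suites run at each stage. The stage and suite names are invented for illustration; this is not mabl’s actual CI configuration.

```python
# Hypothetical mapping of development stages to the test suites
# described above. Names are illustrative, not mabl's real config.
STAGE_SUITES = {
    "local": ["unit", "integration", "e2e"],
    "pull_request": ["unit", "integration", "e2e", "exploratory"],
    "main_merge": ["unit", "integration", "smoke", "regression", "third_party_e2e"],
}

def suites_for(stage: str) -> list[str]:
    """Return the test suites that gate a given pipeline stage."""
    return STAGE_SUITES[stage]

# The merge to main runs the broadest set, since it is the last
# gate before production.
print(suites_for("main_merge"))
```

The key design point is that every stage reruns the cheaper suites (unit, integration) while adding stage-appropriate ones, so a regression is always caught at the earliest possible gate.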
How mabl Prioritizes Tests
Returning to our three core test types (unit tests, integration tests, and end-to-end tests), we have specific goals that ensure each test contributes to a better user experience in mabl’s test automation platform.
With unit testing, we start with line versus branch coverage. Line coverage shows whether your tests actually execute every line of code in your code base, while branch coverage indicates whether they exercise every possible outcome of each conditional. When balanced effectively, these metrics ensure that the building blocks of a feature have been tested thoroughly. Setting clear, enforceable goals around coverage helps maintain testing and quality consistently across your entire product. For example, at mabl, any new change to our user interface must have at least 90% line and branch coverage; otherwise, it can’t be merged.
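The distinction between line and branch coverage is easiest to see in a tiny hypothetical example (shown here in Python for brevity; mabl’s thresholds are enforced by CI tooling, not by code like this):

```python
def apply_discount(price_cents: int, is_member: bool) -> int:
    """Apply a 10% membership discount, if applicable (prices in cents)."""
    total = price_cents
    if is_member:
        total = price_cents * 90 // 100
    return total

# A single test with is_member=True executes every line of the
# function (100% line coverage), but only one of the two branch
# outcomes. Full branch coverage also requires a test where the
# `if` condition is false.
assert apply_discount(1000, True) == 900   # covers the "if" branch
assert apply_discount(1000, False) == 1000  # covers the fall-through
```

This is why a line-coverage number alone can be misleading: the untested branch is exactly where a defect (say, a wrongly negated condition) would hide.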
Similar to unit tests, integration test prioritization can be guided by a coverage goal. These goals enforce the importance of testing and encourage the team to break their own code by mimicking customer behavior. But it’s most important to consider how different software components will interact, and how those interactions can malfunction. The mabl team uses a combination of mabl tests, React Testing Library, and JUnit to ensure that we’re testing as thoroughly as possible at this step. We’ll also start adding permissions-based tests to make sure that our customers have control over their test data and workspaces. Finally, we require new integration tests for every single feature so all new work is covered and these features are incorporated into future tests.
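As a rough illustration of a permissions-based test, here’s a minimal sketch assuming a hypothetical workspace model; mabl’s actual data model and APIs will differ:

```python
# Hypothetical workspace model for a permissions-based test.
# Not mabl's real API -- just a sketch of the pattern: assert that
# members can access a workspace and that non-members cannot.
class Workspace:
    def __init__(self, owner: str):
        self.owner = owner
        self.members: set[str] = {owner}

    def can_view(self, user: str) -> bool:
        return user in self.members

ws = Workspace(owner="alice")
assert ws.can_view("alice")          # owners see their own workspace
assert not ws.can_view("mallory")    # non-members must be denied

ws.members.add("bob")
assert ws.can_view("bob")            # invited members gain access
```

The pattern matters more than the model: every permissions test should include at least one negative case, since the dangerous failure mode is granting access, not denying it.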
This takes us to end-to-end testing, which validates when multiple subsystems are working together and checks that third party systems are functioning. End-to-end tests may overlap with your integration tests, but effective end-to-end testing ensures that the full customer journey works as intended. How your team prioritizes end-to-end tests is largely dependent on the most common customer journeys and user needs. At mabl, we focus on the most high-traffic pages in our test automation application so that our customers have the best experience possible. We also heavily rely on mabl features that help our team leverage real-world customer data to ensure that our testing evolves with our customers’ needs, such as the page coverage feature.
Using mabl’s Page Coverage Feature
The page coverage feature uses machine learning to combine similar application URLs to give mabl users useful insights about real application usage. Let’s look at a hypothetical example:
These are a few URLs that a customer may see in an application, modeled after actual mabl URLs. You’ll notice that two of these pages are actually customized versions of the same page: each is only visible to a specific user, but they share the same general functionality. Instead of testing both pages individually, our team can save time and effort by focusing testing on the general functionality the pages provide. This logic extends across the mabl app: rather than test every page, we can prioritize our testing far more effectively by focusing on the most commonly used functions.
The page coverage feature sorts through this type of data to cluster similar pages for more efficient automated testing. Looking again at the example URLs above, the URLs with random IDs share enough common characteristics that our algorithm can group them together, while still differentiating between the main workspace, the settings page, and the user home pages. To prioritize testing, we also want to know how many unique visitors each page group received: say two people visited the first page, one visited the second, and two visited the third. With effective test prioritization, we’d know to focus testing on these three page groups, as opposed to covering all five individual pages, and to rank each group by its count of unique visitors.
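The grouping idea can be sketched in a few lines of Python. This toy version uses a fixed regular expression to detect ID-like path segments, along with invented URLs and a made-up visitor log; the real page coverage feature learns these patterns with machine learning per customer rather than relying on hard-coded rules:

```python
import re
from collections import defaultdict

# Hypothetical page-view log of (visitor_id, url) pairs. The URL
# shapes are invented for illustration only.
PAGE_VIEWS = [
    ("u1", "/workspace/ab12cd/tests"),
    ("u2", "/workspace/9f8e7d/tests"),
    ("u1", "/workspace/ab12cd/settings"),
    ("u2", "/home"),
    ("u3", "/home"),
]

# Naive "random ID" detector: a six-character lowercase hex segment.
ID_SEGMENT = re.compile(r"^[0-9a-f]{6}$")

def normalize(url: str) -> str:
    """Collapse ID-like path segments into a shared placeholder."""
    parts = ["{id}" if ID_SEGMENT.match(p) else p for p in url.split("/")]
    return "/".join(parts)

# Group page views by normalized URL and collect unique visitors.
visitors: dict[str, set[str]] = defaultdict(set)
for visitor, url in PAGE_VIEWS:
    visitors[normalize(url)].add(visitor)

# Rank page groups by unique-visitor count to prioritize testing.
ranking = sorted(visitors.items(), key=lambda kv: len(kv[1]), reverse=True)
for page, users in ranking:
    print(page, len(users))
```

Five raw URLs collapse into three testable page groups, and the visitor counts give a natural priority order for writing end-to-end tests against them.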
In practice, mabl learns these patterns for each customer by crawling their application, as well as by using data from integrations like Segment, which can be used with mabl for automated testing that truly reflects the habits of your customers. When using the page coverage feature in mabl, you’ll see results that capture the number of daily users for an individual page, as well as the unique tests, test steps, and assertions that have interacted with the page.
Optimizing Software Testing for the Customer Experience
As I mentioned above, the page coverage feature is an invaluable tool for prioritizing tests in ways that reflect how your actual customers use your website or application. When your automated testing strategy is driven by customer data, it’s more likely that testing catches defects before they reach your users.
When you don’t account for how your users are interacting with your application, you risk testing customer journeys that aren’t actually taking place, rendering testing less efficient and potentially slowing down development cycles. With usage data integrated into testing, you can make sure that you have an acceptable level of performance for all your customers. This helps ensure that new features are actually useful for all users, building stronger customer relationships that benefit the overall business. On the flip side, adapting testing strategies to incorporate unexpected user behavior helps ensure that tests capture the defects hidden in surprise user journeys before they reach the customer. The overall impact: testing is consistently able to evolve with the product and the customer to maximize QA value. The page coverage feature makes it easy to consider real customer usage data as quality teams update their testing strategy to accommodate new features.
Test Prioritization Enables Quality Engineering
Test prioritization with real-world usage data allows quality teams to streamline testing and grow their impact on the product, the development process, and the business itself. Machine learning, combined with your actual usage data, helps quality teams create and maintain an efficient testing strategy that supports a better user experience and catches bugs well before they reach your customers. At mabl, we practice what we preach by using our test automation platform to integrate testing early into our development cycles, as well as continuously updating and expanding test coverage as our application evolves.
See how mabl’s page coverage feature works for your team with mabl’s two-week free trial.