mabl has seen incredible growth over the past five years, with hundreds of enterprise customers running nearly 40 million tests per year on our platform as part of their efforts to build premium customer experiences through higher test coverage and quality across their applications.

We’re fortunate to have an engaged and passionate customer community that’s always willing to share their insights and quality goals with us. Since we launched in 2017, their single biggest request has been for us to support automated mobile app testing the same way we support web applications.

As part of our journey to solve this critical and costly challenge, we had dozens of conversations with fast-growing startups, established mid-size companies, and Fortune 2000 enterprises about the current state of mobile app testing and mobile app testing tools. This article reflects what we’ve learned from those discussions.

The State of Mobile App Testing

First, the dataset: this research comes from roughly 70 unique customer conversations conducted live via Zoom. We opted for live conversations so we could dive deeper into certain areas, as opposed to an online survey, where we wouldn’t get the same depth of insight. Titles ranged from QA practitioners and developers to vice presidents and CTOs, and companies ranged from startups to Fortune 2000 enterprises. The data skews toward customers in the United States, though approximately 15% of the conversations were with customers in Europe and Japan.

We dug into a variety of topics, including challenges with mobile app testing, existing approaches to testing mobile apps, the number of tests companies currently maintain for their mobile apps, release velocity, and device preference (real devices or virtual). Here’s what we learned:

Manual Testing Dominates Mobile Application Testing 

64% of the teams we spoke to still test their mobile apps manually. As with software testing for web applications, many organizations are struggling to find enough people to create and maintain an automated testing strategy. Tests are often brittle and flaky due to frequent changes in the application under test, a challenge that’s compounded by a growing number of possible device and operating system combinations.

Maintaining stable environments also proved to be a consistent obstacle to scaling an effective automated mobile testing strategy. Even when a QA team creates automated mobile tests, getting them to run reliably often takes as much time and effort as writing the scripted tests in the first place.

For the 36% of teams automating mobile app tests, the most frequently referenced mobile automation tools were Appium, Detox, Maestro, and Espresso, in that order. 

Rapid Release Schedules Put the Pressure on Quality 

In contrast to web applications, which we found are typically developed and updated in two-week sprints, a plurality (36%) of mobile applications are updated weekly. Only a very small portion of customers release monthly, with the remainder releasing every two weeks. Based on the data in our Testing in DevOps Report, that aggressive cadence challenges even the most mature DevOps teams.

Figure: Deployment frequency by DevOps adoption stage

The higher deployment frequency is likely due to the competitive pressures companies face with mobile applications. Mobile app users churn at higher rates and have a lower tolerance for buggy interfaces or poor performance, making it essential to stay up-to-date on usability and features. The weekly sprint cycles could also be driven by the frequency of iOS and Android updates and the need to ship bug fixes quickly to remain compliant with app store policies, both of which push app providers to stay current with OS releases and avoid regressions.

Common Mobile App Testing Tools and Approaches 

Virtual Device Testing versus Physical Device Testing

When it comes to mobile testing, companies have two primary options: they can test on real, physical devices to get the most realistic user perspective, or they can test on virtual devices, typically simulators for iOS and emulators for Android. The virtual approach is much less expensive, as customers don’t need to provision their own phones or pay for access to real devices on device farms such as AWS Device Farm or BrowserStack.

Despite the cost disparity, our interviews found that only 19% of users test exclusively on virtual devices. The majority of customers test their mobile applications solely on physical devices, or on a combination of physical and virtual devices. Our hypothesis is that the hybrid approach emerged after customers shipped bugs that virtual-only testing had missed, pushing them to introduce real devices into their mobile testing strategy.

Mobile Application Test Suites versus Web Application Test Suites 

Regardless of their reliance on virtual or real devices, customers are running a high number of tests on their mobile applications. Of the customers who shared their test suite data with us, the average team had 83 tests, while the median was 62. Even after removing the outliers (a low of 15 tests and a high of 179), the median held at 62 tests for mobile applications.

Given the small form factor of a mobile device, this seems to be a fairly robust test suite. For context, our customers using mabl for web app testing have a median of 67 tests per workspace. This demonstrates how important mobile application testing is to businesses, even when their mobile app testing strategies are costly and high-maintenance.

What QA Teams Want From Mobile Testing Tools 

Finally, we asked our customer community about the main challenges they face with mobile app testing. We wanted to develop a better understanding of the obstacles associated with automated and manual mobile application testing, and explore whether mabl’s low-code, cloud-native, and AI-driven approach to web application testing could be applied to the critical world of mobile. Unsurprisingly, web testing and mobile testing often share similar pain points. However, a few new challenges surprised our team. The most common challenges we heard were:

  • Environment stability
  • Difficulty writing automated tests
  • Test maintenance and brittleness of tests
  • Speed of test execution
  • The variety of devices and operating systems that need to be tested
  • Cost

Environment stability topped the list by far. When test environments are unstable, enterprises risk deployment delays and are likely to have a harder time integrating automated mobile testing into CI/CD pipelines.

The difficulty of test creation is consistent with web application testing. Regardless of device or platform, teams are struggling to write automation scripts and manage the ongoing demands of test maintenance. Scriptless test automation can help reduce the test creation challenge, but legacy no-code tools often produce tests that are just as brittle as scripted ones.
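To see why scripted mobile tests tend to be brittle, here’s a minimal sketch of one using the Appium Python client (our own illustration, not any team’s actual suite). The device name, app path, and resource-id are hypothetical; the point is that the test is pinned to a specific locator, so any UI change that renames the element breaks the test even though the app still works.

```python
# Minimal scripted Appium test; assumes a local Appium 2.x server.
# The device name, app path, and resource-id below are hypothetical.
from appium import webdriver
from appium.options.android import UiAutomator2Options
from appium.webdriver.common.appiumby import AppiumBy

options = UiAutomator2Options()
options.platform_name = "Android"
options.device_name = "Pixel_6_API_33"       # emulator or device to target
options.app = "/path/to/app-under-test.apk"  # build artifact to install

driver = webdriver.Remote("http://127.0.0.1:4723", options=options)
try:
    # Brittle by construction: the test is coupled to this exact
    # resource-id, so a rename in the next build fails the test.
    driver.find_element(AppiumBy.ID, "com.example:id/login_button").click()
finally:
    driver.quit()
```

Multiply that fragility across dozens of tests and weekly releases, and the maintenance burden described above adds up quickly.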

Other challenges are unique to mobile app testing and highlight the wide range of issues still plaguing the industry. Mobile devices can experience performance and speed declines due to network connectivity, power levels, or other environmental conditions, making it hard to account for every variable that can impact a mobile app.

The sheer volume of devices and operating systems, not to mention regular updates, also contributes to the challenges of mobile app testing. The device and OS permutations can easily run from dozens of combinations into the hundreds…for each of an average of 83 tests! Imagine a world where you need to run 83 tests x 2 operating systems x 4 device types (n and n-1 for iOS and Android) every two weeks - that’s 664 test runs per cycle, or 1,328 per month! And these device x operating system estimates are low, given that there are roughly 30 types of iPhone currently in use and some 24,000 Android models on the market.

Trying to manage quality at this scale easily becomes expensive and time-consuming. For teams relying on manual testing, it can be nearly impossible to accelerate testing while maintaining a quality customer experience. For teams relying on cloud device farms, costs can quickly become uncontrollable. For reference, AWS Device Farm charges $0.17 per device minute for running tests, or a $250/month flat fee per device. If we assume a customer has 83 tests, each taking 10 minutes to run, executed every two weeks, that’s 1,660 device minutes, or roughly $282/month, for just one device and operating system combination. Attempting to cover all 30 iPhone models and 17 OS versions (510 combinations) could cost nearly $144,000 per month…for just iPhones.
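To make that back-of-envelope math explicit, here’s a small cost-model sketch in Python. The constants mirror the figures above; real device farm bills vary with parallelism, setup time, and pricing plans, so treat this as an illustration rather than a quote.

```python
# Back-of-envelope monthly cost of running a mobile suite on a metered
# device farm. Rates mirror the AWS Device Farm figures cited above.

TESTS = 83               # average suite size from our interviews
MINUTES_PER_TEST = 10    # assumed average test duration
RUNS_PER_MONTH = 2       # suite runs every two weeks
METERED_RATE = 0.17      # USD per device minute
UNMETERED_RATE = 250.0   # USD flat fee per device slot per month

def monthly_cost(device_os_combos: int) -> tuple[float, float]:
    """Return (metered, unmetered) monthly USD cost for a device/OS matrix."""
    minutes = TESTS * MINUTES_PER_TEST * RUNS_PER_MONTH  # 1,660 device minutes
    metered = minutes * METERED_RATE * device_os_combos
    unmetered = UNMETERED_RATE * device_os_combos
    return metered, unmetered

print(monthly_cost(1))        # (282.2, 250.0): one device/OS combination
print(monthly_cost(30 * 17))  # (143922.0, 127500.0): every iPhone x iOS combo
```

Even the flat unmetered rate would run $127,500/month for 510 device slots, which helps explain why most teams end up covering only a risk-based slice of the device matrix.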

To sustainably manage mobile application quality at the scale and speed demanded by a mobile-first, DevOps world, QA teams want - and need - automated testing solutions that help them overcome these challenges. 

Building the Future of Mobile Application Quality 

These interviews, though truly insightful, only scratch the surface of mobile testing needs. For example, mobile app security testing, mobile app performance testing, and even more nuanced areas like gestures and accessibility are all acute quality needs that must be accounted for when building competitive user experiences. We’re incredibly grateful to our customer community for their ongoing support, feedback, and expertise as we dive into the needs of quality leaders. This type of in-depth research is extremely valuable and only possible with the buy-in of QA contributors and leaders around the world.

Mobile application testing clearly has significant challenges and presents an incredible opportunity to increase the happiness of mobile app testers and make customer experiences even better! Help us deliver a smarter, scalable solution for automated mobile app testing by joining our private beta.