The mabl blog: Testing in DevOps

How Real-World Development Teams Are Using AI to Improve Software Testing

Written by Bridget Hughes | Jul 6, 2023 12:00:00 PM

AI and machine learning are everywhere nowadays, though perhaps not as ubiquitous in software development as the hype would make it seem. According to Stack Overflow’s 2023 Developer Survey, the developer community is most excited about using AI for testing code.  
The chart above, taken from the latest Stack Overflow Developer Survey, illustrates the diverse ways developers hope to harness intelligent tools. The top categories of interest - software testing, documenting code, and committing/reviewing code - have one thing in common: they're the tasks that most often pull developers away from building new features and products. In a world where developers spend less than half of their time actually coding, the interest in applying AI to non-coding tasks makes sense from both an innovation and a developer experience perspective. 

AI-Backed Test Automation is Already Supporting Development and Quality Teams

Though autonomous testing, in which software testing activities are performed entirely without human intervention, is still a few years away, AI and machine learning are already helping developers and software testers improve testing efficiency. Deployed in the real world, these intelligent innovations are reducing the time and effort needed to improve product quality, unlocking more value within development pipelines and addressing developers' hopes for the future of AI in software development. 

AI Reduces Test Maintenance Through Autohealing 

Autohealing tests use AI to capture a multitude of unique element attributes during test creation and execution, enabling tests to update automatically in response to UI changes. Smart element locators offer an in-depth, adaptable approach to identifying app changes, drastically reducing the time needed for test maintenance. 
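The core idea can be sketched in a few lines. This is a minimal illustration, not mabl's actual implementation: it captures several attributes per element at test-creation time, and when the original locator later fails, it scores the elements currently on the page against that stored "fingerprint" and picks the best match. The attribute names and helper functions here are hypothetical.

```python
# Minimal sketch of auto-healing via multi-attribute fingerprints
# (illustrative only, not mabl's algorithm).

def capture_fingerprint(element):
    """Record multiple attributes of an element during test creation."""
    return {k: element.get(k) for k in ("id", "name", "class", "text", "tag")}

def heal_locator(fingerprint, candidates):
    """When the original locator fails, pick the on-page candidate whose
    attributes best match the stored fingerprint."""
    def score(candidate):
        return sum(1 for k, v in fingerprint.items()
                   if v is not None and candidate.get(k) == v)
    best = max(candidates, key=score)
    return best if score(best) > 0 else None

# The submit button's id changed in a UI upgrade, but its other captured
# attributes still identify it, so the test heals instead of failing.
fingerprint = capture_fingerprint(
    {"id": "btn-submit", "name": "submit", "class": "primary",
     "text": "Submit", "tag": "button"})
candidates = [
    {"id": "btn-cancel", "name": "cancel", "class": "secondary",
     "text": "Cancel", "tag": "button"},
    {"id": "btn-send", "name": "submit", "class": "primary",
     "text": "Submit", "tag": "button"},
]
print(heal_locator(fingerprint, candidates)["id"])  # → btn-send
```

Because the match is scored across many attributes rather than pinned to a single selector, one changed attribute no longer breaks the test.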

For NetForum Cloud, AI-backed autohealing capabilities were an essential part of maturing software testing for digital transformation. Though their team of 25 QA professionals had extensive experience in automated testing with traditional frameworks like Selenium, an increase in pipeline automation and UI upgrades demanded more efficient software testing. With autohealing tests, their team was able to increase automated testing by 40% and reduce manual testing by 20%. Less test maintenance had a powerful domino effect for the entire NetForum Cloud organization, leading to better development practices and less downtime for customers. 

Machine Learning Reduces False Positives in Software Testing

Few things are more frustrating or time-consuming than spending hours investigating a failed test, only to realize that the failure was caused by a timing issue rather than a real defect. Intelligent Wait harnesses machine learning to reduce test failures by incorporating historical application performance into the timing of actions within tests. During each test run, timing data is collected for each step, and test execution is automatically tailored to match the pace of the application. By removing the need to insert manual wait steps or other cumbersome configurations, quality engineering teams and developers can improve test reliability and reduce false positives without any extra work, saving them valuable time and effort. 
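The underlying principle is simple to sketch. The following is an assumption-laden illustration, not mabl's actual algorithm: instead of a fixed manual wait, each step's timeout is derived from how the application has actually performed in recent runs, using a high percentile of observed durations plus a safety margin. The function name and parameters are invented for illustration.

```python
# Illustrative sketch of an adaptive, history-based wait
# (not mabl's Intelligent Wait implementation).

def adaptive_timeout(history_ms, floor_ms=1000, safety=1.5):
    """Derive a step timeout from historical step durations: take roughly
    the 95th percentile of past runs and add a safety margin, never
    dropping below a sane floor."""
    if not history_ms:
        return floor_ms  # no data yet: fall back to the default
    ordered = sorted(history_ms)
    p95 = ordered[min(len(ordered) - 1, int(0.95 * len(ordered)))]
    return max(floor_ms, int(p95 * safety))

# Observed durations (ms) of a "click checkout" step across recent runs;
# one slow outlier pushes the wait up so that run doesn't become a flaky
# false positive.
history = [420, 380, 510, 460, 900, 430, 470]
print(adaptive_timeout(history))  # → 1350
```

A fixed wait tuned to the average run would have flagged the 900 ms run as a failure; a wait derived from the observed distribution absorbs it without slowing down the fast runs.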

QA Manager Janet Bracewell shared how AI and machine learning-backed test automation has made an impact on her team at SmugMug:

“The Unified Runner with Intelligent Wait has allowed our team to focus on improving our product and the user experience, rather than managing tests. The faster, more consistent execution across local runs, cloud runs, and CI/CD headless pipeline runs has been instrumental in showing the value of testing across the development organization.”

Intelligent Wait and other AI-backed test automation capabilities are already helping development organizations stay focused on building new features, which aligns with the broader developer needs indicated in the Developer Survey. 

Building a Foundation of Trust for Autonomous Testing 

In addition to highlighting the broad interest in AI-supported software testing, the 2023 Stack Overflow Developer Survey illustrated the need for trusted partners in intelligent testing, particularly as autonomous testing becomes possible. Fewer than half of developers trust the outputs of AI tools, indicating that people will need the skills and knowledge to understand algorithms and debug intelligent tools. Starting the journey to autonomous testing now, with test automation solutions designed to democratize advanced capabilities and build on people's existing skills, is essential for adopting AI and machine learning tools that actually address the needs of developers, quality teams, and their customers. 

Explore the exciting world of intelligent test automation with mabl’s two-week free trial.