It’s been about 60 years since the advent of machine learning, and it now finds application in almost every field. The insurance industry employs machine learning to project the losses it will incur from a natural disaster. Machine learning now features prominently in automating aspects of automotive travel, in cancer diagnosis and research, and in stock trading.
We recently conducted a survey that reveals how respondents from the testing community view the challenges of having the right tools to test properly, of testing efficiently in the age of Continuous Integration, and of finding good job candidates. One respondent can see where the QA profession is heading:
“All testers need to learn more technical skills—scripting/software development, devops, machine learning—[because] there will be fewer testers [whose] main job is just test case execution.”
Another respondent clearly sees the value of increasing automation in specific areas:
“If you can maintain and trust your automated testing, you can get software to customers faster. [Machine learning] will reduce the need for resources to spend valuable time maintaining tests, which can then be better spent reducing risk by planning and expanding breadth and depth of test coverage.”
The essence of testing
Merely interacting with an app isn’t testing it, of course. Machine learning tech that only interacts with an application doesn’t really provide any testing value. Testers know this full well, and good testers bring much more to the table. We work hard to understand the business and maintain a set of heuristics that will help expose defects. We strive to adopt the persona of the worst—and best—users of the application. We seek to balance the interests of the company and its customers. All of this is best done by thorough exploration of the product and carefully thinking through possibilities and potentialities.
Perhaps most important to remember: it is very challenging to code automated tests so that they can handle a wide variety of subtle and intricate details. Even when that’s achievable, maintaining such tests often requires as much effort as manual testing, since brittle tests demand constant debugging. This makes the need for AI/ML in testing all the more urgent.
Many companies that successfully add value to their testing efforts with ML have shaped their algorithms to discern whether the outcome of a particular action is likely to reveal a defect. They have come to know, with a high level of confidence, when specific actions and/or results deviate from expectations. Isn’t that a much better description of testing?
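One simple way to picture “knowing when results deviate from expectations” is baseline anomaly detection. The sketch below is purely illustrative (not any particular vendor’s algorithm): it compares a new measurement, such as a page-load time, against metrics collected from earlier passing runs and flags values that fall far outside the historical spread.

```python
import statistics

def is_anomalous(baseline, observation, z_threshold=3.0):
    """Flag an observation that deviates strongly from baseline runs.

    baseline: metric values (e.g., page-load times in ms) gathered
    from previous passing test runs; observation: the latest value.
    """
    mean = statistics.mean(baseline)
    stdev = statistics.pstdev(baseline)
    if stdev == 0:
        # No historical variation at all: any change is suspicious.
        return observation != mean
    z = abs(observation - mean) / stdev
    return z > z_threshold

# Hypothetical load times (ms) from earlier runs, then two new readings.
baseline = [210, 195, 205, 220, 200, 215, 208, 199]
print(is_anomalous(baseline, 212))   # within normal variation: False
print(is_anomalous(baseline, 900))   # far outside the baseline: True
```

Real ML-assisted tools learn far richer models than a z-score over one metric, but the principle is the same: the expectation comes from observed behavior rather than a hand-written assertion.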
The primary challenge in testing automation
James Whittaker, who wrote the book How Google Tests Software, says testing is more difficult than writing software. “You have to be smarter than the programmer to find problems in the code.” Testers love the sound of those words; software engineers are skeptical.
A conventional testing engineer doesn’t code, which means they won’t readily accept an invitation to become an automation test builder. There’s also a cultural challenge: manual testers weren’t hired to do automation. Conversely, few developers want to switch to testing.
If you attempt to transition conventional testers (who lack development expertise) into test automation roles, the result will be entry-level developers with little experience. The established development team will balk at having too many novices. Moreover, testers who attempt to convert too rapidly tend to produce disorganized, inefficient, copy-and-paste code that’s buggy and difficult to maintain.
When you dare to venture down this path, seek a transition strategy that works, and plan it out. You need clear objectives, sober expectations, quantifiable transition costs, budget extensions, and timeline estimates. Perhaps most importantly, conduct a realistic analysis of the motivations and capabilities of each person on the team.
We're not alone...
Artificial intelligence and machine learning are on the rise and are becoming easier and easier to leverage for practical applications. Consider just a few examples:
AutoML: Artificial intelligence used to belong only to mad scientists in science fiction books and movies. That’s no longer the case with Google’s AutoML initiative, focused on creating machine learning software that can design machine learning software. With AutoML, you build different algorithms that compete with each other, pick the winners of that competition, have the winners compete, and iterate. The development team is making progress toward AI that’s easier to code by offering the user a simple graphical interface to train their own machine learning models. Presently, the service only runs image recognition—users drag and drop a set of pictures, then watch the software identify recurring elements or items. Urban Outfitters has been testing how Cloud AutoML can be useful in identifying specific items of clothing in their catalog, to help users filter on specific attributes.
Azure Machine Learning Studio: Big data and the rising need for real-time, actionable data analysis are driving data scientists and analysts to use machine learning. Azure Machine Learning Studio is a drag-and-drop tool you can use to build, test, and deploy predictive analytics solutions on your data. You drag datasets and analysis modules onto an interactive canvas and connect them to form models, which you can then publish as a web service so that others can access your model.
Amazon Macie: Security is another fast-moving, booming field. Amazon Macie is a security service that uses machine learning to automatically discover, classify, and protect sensitive data in AWS. Amazon Macie recognizes sensitive data such as personally identifiable information or intellectual property, and provides you with dashboards and alerts that give visibility into how this data is being accessed or moved. The fully managed service continuously monitors data access activity for anomalies, and generates detailed alerts when it detects risk of unauthorized access or inadvertent data leaks.
For testers, the rise of artificial intelligence and machine learning doesn't mean an impending apocalypse. The challenge will be how to leverage machine learning to help human testers do their jobs better and faster. Joe Colantonio notes that the third wave of test automation is here, and most of the tools in this wave are leveraging machine learning and AI-assisted technology.
Humans versus Pseudo-cogitating Machines
AI and machine learning won’t annihilate testing, but testing will become considerably more difficult as we confront applications with machine learning tools—for the simple reason that we won’t know how to constrain the application in all cases that a machine learning engine presents. For the very difficult problems, machine learning makes choices according to probabilities, not certainties.
For testing professionals who don’t maintain an interest in what humans will continue to do exceptionally well, the future might be scary. It’s important to remember that humans excel at exploration, analysis, creativity, understanding, and applying what they learn.
To date, most testers take a deterministic approach to their discipline: a computer only produces results that a tester has predetermined to be either correct or incorrect. All of this changes when machine learning comes into view. Machine learning performs a much more extensive examination and preliminary analysis, so we’ll need to grapple with a significant number of indefinite results and think hard about solutions to very complex problems.
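The shift from deterministic to probabilistic verification can be made concrete. The sketch below is a hypothetical example: `flaky_classifier` stands in for an ML-backed component that is right most of the time but not always, and the test asserts on an aggregate accuracy rate across many runs rather than on any single output.

```python
import random

def flaky_classifier(x):
    # Hypothetical stand-in for an ML component: correct ~95% of the time.
    return (x > 0) if random.random() < 0.95 else (x <= 0)

def assert_mostly_correct(fn, cases, min_accuracy=0.9, runs=500):
    """Assert on an aggregate accuracy rate, not on a single result."""
    correct = 0
    total = 0
    for _ in range(runs):
        for x, expected in cases:
            correct += (fn(x) == expected)
            total += 1
    accuracy = correct / total
    assert accuracy >= min_accuracy, f"accuracy {accuracy:.2%} below floor"
    return accuracy

random.seed(7)  # fixed seed so the demo is repeatable
cases = [(5, True), (-3, False), (1, True)]
print(f"{assert_mostly_correct(flaky_classifier, cases):.2%}")
```

A traditional `assert fn(5) is True` would intermittently fail here even though the component is behaving as designed; the statistical assertion captures what “working correctly” actually means for a probabilistic system.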
Historically, the most difficult machine-to-human testing initiatives are those that are indefinite, such as maintaining the preconditions testers need in place to reveal defects in complex computing environments (like multi-threaded apps). Today, as machine learning is transitioning into mainstream software development, we’re already seeing that non-deterministic activity is getting more attention from the community. As testers, we need to seriously consider how fully we are going to meet the challenge of finding software defects that don’t fit with our preconceptions.
Staying ahead of the bots
In an age of constant evolution, it's no surprise that AI-driven solutions have come about to help us with our jobs. Will they replace testers as we move forward? Let’s stop to reflect on how professional testers are adapting to the new problem sets that we are already encountering. How will we stay ahead? More importantly, how can we become even more effective by leveraging the power of upcoming machine-learning test tools?
One thing is certain: if you want to remain effective in this new age of testing automation, you can expect your skills and your role to undergo significant changes.
Are you ready for the revolution?
Start by giving mabl a try!