A continual problem with test automation in the DevOps pipeline is maintenance. I could spend hours sharing posts and blogs about this matter, telling stories about how it affected our team's capacity and time to market, and giving you several examples of what teams have done to sort this problem out. All the effort we invest in maintenance is invisible to business stakeholders: it consumes our team's capacity, but it does not add value to our clients directly.
Test automation challenges at scale
In one of my jobs, we faced this problem on a large scale. We had an average of more than 1,000 automated tests running at each code check-in, plus one big run at midnight that we called the full regression. Our suites were organized into these levels:
- L1: Unit testing – Development Stage
- L2: Integration Testing – Commit Stage
- L3: UI Functional Test MVP Suite – QA Stage
- L4: Regression suites – Full package – UAT Stage
Those testing levels were our own creation, but we drew on the great lessons shared in this book, which I recommend for any team looking to build their testing strategy.
After each run, we noticed that 80% of the UI and regression automated tests failed, due to:
- Changes in UI components
- Poorly written test code
- Unhandled exceptions and errors
- Timeouts everywhere
A new experiment
We had to make a decision. We had invested a lot of effort in test automation, and we were the product team with the most automated tests compared to all the other product teams globally in the company. This brought us attention from lots of managers, who were keeping their eyes on our DevOps and test automation activities. In addition, we were participating in a DevOps competition that involved all the IT teams of the corporation. So, we decided to invest time in getting the failed tests into good shape and passing in the pipeline. We called this mindset "Keep them green and running." We communicated this effort to all the product teams, and we asked the Product Owners to consider maintenance time as part of their team capacity, include tasks for this purpose, and keep it part of their definitions of "ready" and "done."
Unfortunately, we did not achieve the goals we had hoped for. We still had the same test results, with more than 80% of tests failing. In almost every case, these were false failures: the tests stopped long before reaching their last steps. To sum up, our test results were not accurate at all and did not inspire confidence in our teams, and confidence is critical in test automation.
Thinking outside the box
We needed to try something different. After some discussion, research, investigation, and thinking outside the box, we decided to use gamification to keep our tests in good shape: running and green. We named the game "Game of Testing" and established some critical points about it:
- Maturity levels: Gamers could be developers, testers, or anybody who owned an automated test running in the pipeline. They matured in the game from a beginner level to a master level.
- Visibility: We shared the game boards, including rankings, test results, and so on, on big monitors in the office, allowing everyone to be aware of the gamers and the results.
- Weight: This was the most complicated, mathematical part. We needed to establish formulas and weights for the game's point system so we could evaluate test results and data per gamer and per team, award points, and promote gamers to new levels.
- Recognition: We tried to recognize all the gamers taking part. For the best gamers, we shared the results with local and global senior managers so their achievements could be known and recognized. This helped encourage more team members to participate.
The game was quite simple:
- Gamers: All the team members who owned an automated test running in the pipeline.
- Point system: A gamer won points for tests running green and lost points for tests that failed with false results. Later, we added negative points for breaking builds.
- Rules: You needed to keep all your tests and your team's tests running in the pipeline in good shape for each run. So, you had to invest effort in maintenance and continuous improvement if you wanted to rock the game.
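To make the point system concrete, here is a minimal sketch of how such scoring could work. The point values, level names, and thresholds below are illustrative assumptions of mine; the chapter does not share the team's actual formulas or weights.

```python
# Hypothetical sketch of a "Game of Testing" point system.
# All point values and level thresholds are assumptions for illustration,
# not the actual formulas the team used.

LEVELS = [(0, "Beginner"), (50, "Apprentice"), (150, "Expert"), (300, "Master")]

POINTS = {
    "green": 5,           # test passed in the pipeline run
    "false_failure": -3,  # test failed for a "fake" reason (flakiness, timeout)
    "broken_build": -10,  # later addition: breaking the build costs points
}

def score(results):
    """Sum points for a gamer's test outcomes across pipeline runs."""
    return sum(POINTS[outcome] for outcome in results)

def level(points):
    """Return the highest maturity level whose threshold is met."""
    name = LEVELS[0][1]
    for threshold, title in LEVELS:
        if points >= threshold:
            name = title
    return name

# Example: 12 green runs, 2 false failures, 1 broken build.
runs = ["green"] * 12 + ["false_failure"] * 2 + ["broken_build"]
total = score(runs)         # 12*5 - 2*3 - 10 = 44
print(total, level(total))  # 44 Beginner
```

The negative weights are what tie the game back to maintenance: a flaky test that keeps producing false failures steadily drains a gamer's score, so fixing it becomes the rational move.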
This approach helped us a lot to decrease false failures and keep the tests up and running in the pipeline. We went from 80% of tests failing to only 20% in approximately three months, and our numbers have continued to decrease. Furthermore, people had fun playing the game, and they internalized that quality can be fun rather than boring, as maintenance usually was.
While we received several criticisms about our failing tests at the beginning of our improvement efforts, and people lost trust in our quality assurance process, we didn't give up trying to fix the problems. In fact, it motivated our team to think outside the box and turn the problem into a useful and fun project. We spent many hours investigating continuous testing, scripting, and related topics. Besides that, we wanted to do something different, something that would motivate other team members to join.
Gamification can provide that motivation. You may not achieve the desired results immediately, but if you prepare your environment and involve everyone in trying potential solutions, the game will deliver results. If you only address the test issues, you get tests that pass or fail; but if you involve and motivate people to work on problems together, you always win, and you have fun. I recommend you try gamification to sort your problems out; you may be surprised at the results!