Intelligent test automation solutions such as mabl are helping teams ship higher quality software faster by improving test coverage and making QA more impactful. Due to their ease of use, these low-code tools are also making powerful test automation accessible to many people for the first time. While it is useful (and fun!) to dive right in and create lots of tests when you start using a new test automation tool, our experience is that you’ll be most successful in the long term if you take a strategic approach to your test automation. Here are a few of the strategic lessons that have helped many teams get off to the right start in their test automation journey.
Start with goals, communication, and alignment
As Watts Humphrey, a pioneer in software quality, once said, “Unplanned process improvement is just wishful thinking.” One of the keys to success in leading a test automation effort is recognizing that you’re driving both a technical project and an evolution process, and both need to be planned. When you’re ready to get started, try to carve out time with all of the key stakeholders of the project to discuss and agree on the following, minimally:
Goals - What are the objectives of our automation efforts? How do we define and measure success? How can we best align with organizational and company objectives?
Roles - Who is responsible for all of the aspects of quality within your team, such as defining test cases, creating tests, triaging failures, updating tests that are out of date, determining when tests should run, etc.?
Plan - What are the stages of your test automation rollout? What are the key tasks that must be completed in each stage? What is the timeline?
Process - How will you communicate? How will you collaborate? How will you manage the tests, plans, etc.? How will you monitor progress, incorporate lessons learned and iterate to improve over time?
As with any new engineering project, a little up-front work on goals and expectations will go a long way. Likewise, just as software projects are iterative, revisit your goals, communication, and alignment regularly to hone your approach.
Be thoughtful about test coverage
End-to-end tests, by definition, capture your customer’s journey through your application. Your tests should reflect this perspective and focus on the needs of the end user. Compile a list of these journeys and prioritize them; they may already exist disguised as acceptance criteria. With your automation budget in mind, make sure that you have tests implemented for the most important flows first. Do not treat this list as a static resource. Reevaluate it regularly with business stakeholders and, if possible, factor in customer usage data. Remember that customers are the true judge of quality.
You may be inclined to broaden the scope of end-to-end tests, but resist the temptation. For example, boundary testing, input validation testing, and performance, stress, and scalability testing are likely best accomplished at other test levels such as unit, integration, or system testing. End-to-end testing is not the place for testing low-level logic present in your code; implement that verification in unit tests. Unit tests exercise the product from the “inside out,” while higher-level tests, such as end-to-end tests, have an “outside in” perspective.
Take a whole-team approach
One of the most important benefits of “shift left” is the ability to involve the whole team in quality; it’s no longer the QA engineer on an island with full accountability for product quality. Instead, QA, developers, support and others partner to build quality into the product from the start.
Low-code testing plays a crucial role in this approach by making test automation accessible to the entire team, without requiring software development expertise. This means developers can create and run automated tests locally before they commit code, and QA can work to expand or refine those tests either before or after the commit. Likewise, they can partner, along with product owners and others, to create and maintain an end-to-end regression testing suite, while those focused on operations and support can take advantage of these low-code tests for synthetic transaction monitoring in production.
If you’re using low-code tools like mabl, get the whole team involved from the start to improve collaboration and build quality throughout your software development lifecycle.
Invest in efficiency, speed, and reuse; avoid duplication
Many software development best practices apply to your low-code tests just as well as your product code. As with product code, investments in reuse, speed, and efficiency across your test suite will pay significant dividends in the long run.
Typically, end-to-end tests take longer to execute than unit tests, and they are often executed as part of CI/CD pipelines. To shorten the feedback loop to your team, investigate how you can make your tests run faster. You may be able to break up large monolithic tests into smaller chunks and run them in parallel. Similarly, you can design tests to be independent of each other - for example by using different user accounts - to enable parallel test executions. Tests that no longer provide value, never fail, or cover user stories that customers are less likely to use should be considered for removal.
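The parallelization idea can be sketched as follows. This is a minimal illustration, not a real test runner: the check names and test accounts are hypothetical, and `time.sleep` stands in for actual browser or API steps. Because each check uses its own account, none of them share state, so they can run concurrently:

```python
import time
from concurrent.futures import ThreadPoolExecutor

# Hypothetical independent end-to-end checks; each uses its own
# test account so parallel runs never interfere with one another.
def run_check(name, account):
    time.sleep(0.1)  # stand-in for real browser/API steps
    return (name, account, "passed")

CHECKS = [
    ("checkout-flow", "test-user-1"),
    ("profile-update", "test-user-2"),
    ("search-results", "test-user-3"),
]

def run_suite():
    # Running independent checks in parallel cuts wall-clock time
    # roughly to that of the slowest check, instead of the sum.
    with ThreadPoolExecutor(max_workers=len(CHECKS)) as pool:
        futures = [pool.submit(run_check, n, a) for n, a in CHECKS]
        return [f.result() for f in futures]

results = run_suite()
print(results)
```

The key design choice is in the data, not the executor: because each check carries its own account, the suite stays correct no matter how the runner schedules it.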
Avoid anti-patterns such as static waits; they are time bombs that will likely make your tests flaky. Instead, work with your team to come up with a dynamic wait strategy that has minimal impact in the common case, but can tolerate occasional time increases.
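A dynamic wait is just polling with a deadline. The sketch below (function name and timings are illustrative, not from any particular framework) shows why it beats a static `sleep`: it returns the instant the condition holds, yet still tolerates an occasional slow response up to the timeout:

```python
import time

def wait_until(condition, timeout=10.0, interval=0.2):
    """Poll `condition` until it returns truthy or `timeout` elapses.

    Returns immediately once the condition holds, so the common case
    adds almost no overhead, unlike a fixed `time.sleep(10)`.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = condition()
        if result:
            return result
        time.sleep(interval)
    raise TimeoutError("condition not met within %.1f s" % timeout)

# Example: a hypothetical "element" that becomes ready after ~0.5 s.
ready_at = time.monotonic() + 0.5
found = wait_until(lambda: time.monotonic() >= ready_at, timeout=5.0)
print(found)
```

Most automation tools, including browser drivers, ship a built-in equivalent of this loop; prefer that over hand-rolling one, but the principle is the same.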
As with product code, end-to-end tests benefit from reuse to minimize maintenance effort. Reusing certain test steps makes it easy to update a large suite with a minimum number of changes. Multiple tests for example, may share the same login or navigation steps. Consider creating a library of these reusable steps that are easy to discover and leverage by other team members. Likewise, unless it is part of the behavior that you’re trying to verify, avoid hard-coding any environment information, including host information, location data, browser-specific information, or otherwise into your test steps, as this limits reuse of the tests themselves. For example, you may want to test the same application flow with different user profiles or roles. Rather than creating distinct tests for each role, you can create one set of steps to reflect the actions and assertions that you want to verify and parameterize the user role.
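The role-parameterization idea can be sketched as one set of steps driven by a table of roles and expectations. The roles, step names, and expectations below are hypothetical stand-ins for real login steps and UI assertions:

```python
# One set of steps, parameterized by user role, instead of a
# separate copy of the test per role. Roles and expectations here
# are hypothetical.
ROLES = {
    "admin":  {"can_see_settings": True},
    "viewer": {"can_see_settings": False},
}

def login(role):
    return {"role": role}              # stand-in for real login steps

def settings_visible(session):
    return session["role"] == "admin"  # stand-in for a UI assertion

def run_settings_test(role, expectations):
    session = login(role)
    assert settings_visible(session) == expectations["can_see_settings"], role
    return "passed"

results = {role: run_settings_test(role, exp) for role, exp in ROLES.items()}
print(results)
```

Adding a new role then means adding one row to the table, not cloning and maintaining another whole test.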
Look at quality signals continuously and define triggers for actions
Over the last couple of years, quality has shifted from a rigid way of working to a flexible, continuous evolution, and that shift changes how the QA engineer works. Every minute, hour, or day, new information flows into the team that could affect the way the team acts, tests, validates, and thinks. The team should make these different inflows visible and respond to them in an efficient, structured way.
Most teams now rely on a communication tool like Slack, especially with remote work. You may have several channels that need the team's attention: a support channel where new customer bugs land, a channel where test or CI/CD failures notify certain teams or people, or a channel where newly filed team bugs appear. Each of these is a trigger for the team, and it is up to the team to group these quality inflows and act on them. Based on valid feedback, the QA engineer may need to rework certain low-code tests to improve their impact, and a sudden increase of bugs in a particular area should prompt them to improve coverage there. Discuss with the team where that extra coverage is best created: in tests close to the code, or, if those are too difficult to write, further from the code, as with low-code tests.
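Grouping quality inflows can start very simply: tally signals by product area and flag areas that spike. This is a toy sketch, with hypothetical signal records (as might be collected from Slack channels or CI webhooks) and an assumed, team-chosen spike threshold:

```python
from collections import Counter

# Hypothetical incoming quality signals, e.g. gathered from a
# support channel, a CI-failure channel, and a team bug channel.
signals = [
    {"source": "support", "area": "checkout"},
    {"source": "ci",      "area": "search"},
    {"source": "support", "area": "checkout"},
    {"source": "bugs",    "area": "checkout"},
]

# Group inflows by product area; a spike in one area is a trigger
# to revisit test coverage there.
by_area = Counter(s["area"] for s in signals)
SPIKE_THRESHOLD = 3  # assumed team-chosen threshold

needs_coverage_review = [a for a, n in by_area.items() if n >= SPIKE_THRESHOLD]
print(needs_coverage_review)
```

Even this crude grouping turns a noisy stream of channel messages into a concrete, discussable trigger for the team.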
Ensure that test failures drive action
Time and time again, we have seen teams struggle to achieve their desired quality results when they fail to fully integrate testing into their team’s workflow. With the appropriate buy-in (see strategy 1), the best approach is to set clear expectations that test failures must drive action.
Practically speaking, for many teams, this will mean that test failures within a CI/CD pipeline will prevent changes from proceeding until the failures are addressed either by updating the tests, fixing the issue, or adjusting the test environment. You should consider whether test failures must be linked to a tracked issue (in Jira or otherwise) that gets triaged, prioritized, and addressed. The most successful teams take the time to document and classify their test failures for trending purposes. Classification is critically important because it can help you identify patterns or clusters of bugs, environment issues, and test design that inform a feedback loop to help the team improve and become more efficient over time.
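Classification for trending can be as lightweight as counting failures per cause. The failure records and classification labels below are assumptions for illustration, as might be exported from a test runner or issue tracker:

```python
from collections import Counter

# Hypothetical failure records with team-assigned classifications.
failures = [
    {"test": "checkout-flow", "cause": "product-bug"},
    {"test": "login",         "cause": "environment"},
    {"test": "checkout-flow", "cause": "product-bug"},
    {"test": "search",        "cause": "test-design"},
]

# Counting failures per classification reveals trends: many
# "environment" failures point at infrastructure, while many
# "test-design" failures point at the test suite itself.
trend = Counter(f["cause"] for f in failures)
print(trend.most_common())
```

Reviewing a trend like this in retrospectives closes the feedback loop the paragraph above describes: the dominant cause tells the team where to invest next.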
One of the most significant benefits of the new generation of test automation tools is that they can make testing more enjoyable. They reduce repeated, monotonous manual checks. They eliminate a lot of the frustrating flakiness of legacy automation frameworks. They make it easier to collaborate across the entire team. Hopefully these strategies help you make the most of your investment in test automation and have some fun along the way!
Want to give mabl’s low-code test automation a try? Sign up for a free trial today!
*Special thanks to Kashyap Prasad and Thomas Noë for contributing to this post!