
Peer Review Guidelines for Low-code Automated Tests

by Dan Belcher

Peer review plays an important role in driving software quality. In modern software development, peer feedback and approval can be our most powerful tool for managing the quality of our codebase. Across all of our artifacts and deliverables (code, business requirements, user stories, architecture, designs, test cases, automated tests, and more), we rely on peer feedback to help us grow and evolve our thinking.

In recent years, low-code[1] test automation solutions such as mabl have emerged as alternatives to traditional script-based frameworks such as Selenium. Using “record and replay” test authoring, these solutions expand the population of people who can create and maintain automated tests. Leading solutions such as mabl employ novel approaches to dramatically reduce the time needed to create and maintain automated tests. However, achieving the full benefit of low-code test automation requires specialized knowledge and awareness of leading practices. Peer review is a great way for teams to improve the quality of their tests while deepening the relevant knowledge of the entire team.

At mabl, we are our own customer first; our team has spent 3+ years using mabl to test our product and website, and we spend a great deal of time helping our customers review and improve their low-code tests. We thought it would be useful to share our review guidelines in the hope that they will help you improve your own test reviews.

Mabl’s test review guidelines

Review the test title, description, and metadata

Open the test page in the mabl application and review the high-level descriptive content.
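As a purely illustrative sketch, here is the kind of descriptive content worth checking for. The shape below is invented for this example; it is not mabl’s actual data model:

```typescript
// Hypothetical shape for a test's descriptive content. Invented for
// illustration; not mabl's actual data model.
interface TestMetadata {
  title: string;       // Does it say what the test verifies, in plain language?
  description: string; // Does it say why the test exists and what is out of scope?
  labels: string[];    // Can teammates find the test by feature or area?
}

const reviewed: TestMetadata = {
  title: "Checkout: guest user can complete a credit card purchase",
  description:
    "Covers the happy path for guest checkout. Login and saved-card " +
    "flows are exercised by separate tests.",
  labels: ["checkout", "payments", "smoke"],
};

// A reviewer would flag a title like "Test 42" or an empty description.
console.log(reviewed.title);
```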

Review the test step descriptions

Again on the test page, review the individual test steps.
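For a concrete illustration (the descriptions here are hypothetical), compare step descriptions that merely echo a selector with ones that convey intent:

```typescript
// Hypothetical step descriptions; illustrative only, not output from
// any particular tool.
const echoesSelector = [
  'Click "div.card > span:nth-child(3)"',
  'Enter "SAVE10" into "input#f2"',
];

const conveysIntent = [
  'Click the "Add to cart" button on the product page',
  'Enter the promo code "SAVE10" into the discount field',
];

// A reviewer should be able to reconstruct the user journey from the
// descriptions alone, without replaying the test.
console.log(conveysIntent.length === echoesSelector.length);
```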

Review the test in action

If possible, replay the test in the Trainer (or equivalent) step-by-step in order to observe the actual functionality that the test is exercising.

Review the plan/environment/application configuration

Consider how the test will be run and whether the configuration could be optimized: which environments and browsers it targets, how frequently it runs, and so on.
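As an illustration, here is a hypothetical plan configuration annotated with the questions a reviewer might ask. The field names are invented for this sketch, not taken from mabl’s actual schema:

```typescript
// Hypothetical plan configuration. Field names are invented for this
// sketch; they are not mabl's actual configuration schema.
const planConfig = {
  environments: ["staging", "production"],   // Is running against production intentional?
  browsers: ["chrome", "firefox", "safari"], // Does coverage match what users actually use?
  schedule: "hourly",                        // Is the frequency worth the runtime and noise?
  retries: 2,                                // Could retries be masking real flakiness?
};

console.log(`Plan covers ${planConfig.browsers.length} browsers per environment.`);
```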

Re-review the test steps

After stepping through the test, review the steps again with that context in mind, looking for common issues such as hard-coded waits, brittle element targeting, and missing or weak assertions (illustrated in the sketch below).
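These step-level smells are easiest to show in script form. The Playwright-flavored sketch below (the URL, button name, and confirmation text are hypothetical) contrasts a fixed pause with an explicit assertion; the same reasoning applies when reviewing low-code steps:

```typescript
import { test, expect } from "@playwright/test";

// Illustration only: the URL, button name, and confirmation text are
// hypothetical. The point is the pattern, not this particular page.
test("guest can place an order", async ({ page }) => {
  await page.goto("https://example.com/checkout");

  // Smell: a fixed pause slows every run and can still race the app.
  // await page.waitForTimeout(5000);

  // Better: act, then assert on an observable outcome, so a failure
  // points at a real regression rather than a timing accident.
  await page.getByRole("button", { name: "Place order" }).click();
  await expect(page.getByText("Order confirmed")).toBeVisible();
});
```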

Consider the overall structure and implementation approach

After re-reviewing the steps individually, step back and consider whether there are broader opportunities to improve, such as extracting repeated sequences into reusable flows or splitting one sprawling test into several focused ones (see the sketch below).
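For example, a login sequence repeated across many tests is a classic candidate for extraction: in a low-code tool it becomes a reusable flow; in script form, a shared helper. A minimal sketch, assuming a Playwright-style setup with a hypothetical URL and field labels:

```typescript
import type { Page } from "@playwright/test";

// Minimal sketch of a shared login helper. The URL and field labels
// are hypothetical; in a low-code tool the equivalent would be a
// reusable flow rather than a function.
export async function logIn(page: Page, email: string, password: string) {
  await page.goto("https://example.com/login");
  await page.getByLabel("Email").fill(email);
  await page.getByLabel("Password").fill(password);
  await page.getByRole("button", { name: "Sign in" }).click();
}
```

Centralizing the sequence means that when the login page changes, reviewers and maintainers update one place instead of dozens of tests.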

Sharing and learning to improve quality

We hope these guidelines help you improve the quality of your low-code tests and, even more importantly, serve as a catalyst for sharing and learning across your team. To get the most out of your reviews, you might also consider periodically debriefing in person as a team: look back at a recent set of reviews and find ways to improve the process, whether by adding to (or deleting from!) the guidelines, identifying opportunities for training, or something else entirely. And to that end, if you have suggestions for the guidelines, please don’t hesitate to reach out!

Many, many thanks to some of our favorite customers who took the time to review and add feedback on the guidelines, including Dai Fujihara, Troy Carter, Thomas Noë, and Jonathan Kuehling.

Want to give mabl’s low-code test automation a try? Sign up for a free trial today!


[1] As an industry, we have not yet converged on standard terminology for these new solutions; “low-code,” “codeless,” and “scriptless” are used as synonyms. Generally speaking, I believe the most accurate term is “low-code,” given that solutions like mabl do, optionally, support custom code and other programming concepts such as loops, conditionals, parameterization, and so forth. For a more precise definition of “low-code,” see Wikipedia.
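To make that concrete, here is a tiny, tool-agnostic sketch of those concepts (parameterized data driving a loop with a conditional), written as plain TypeScript rather than any product’s snippet format:

```typescript
// Tool-agnostic illustration of parameterization, a loop, and a
// conditional. The promo codes and the "expired" rule are invented.
const promoCodes = ["SAVE10", "SAVE20", "EXPIRED"];

for (const code of promoCodes) {
  const shouldBeAccepted = code !== "EXPIRED"; // stand-in for an app response
  if (shouldBeAccepted) {
    console.log(`${code}: expect the discount to be applied`);
  } else {
    console.log(`${code}: expect the code to be rejected`);
  }
}
```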

