Peer Review Guidelines for Low-code Automated Tests

Written by Gevorg Hovsepyan | Sep 15, 2022

One of the most persistent, and least discussed, silos inhibiting DevOps adoption is the process silo. While developers have ecosystems of tools dedicated to facilitating best practices like branching, peer review, and conflict resolution, quality teams often lack the same level of support for collaboration. As testing shifts left and covers a broader definition of the customer experience, quality teams need solutions that enable collaborative DevOps processes and match the accelerating pace of development.

To help software testing teams adopt quality engineering, mabl introduced enhanced branching capabilities that enable everyone in QA to participate in best practices like peer review. Peer review plays an important role in driving software quality: in modern software development, peer feedback and approval are powerful tools for managing the quality of the codebase. Across all artifacts and deliverables (code, business requirements, user stories, architecture, designs, test cases, automated tests, and more), DevOps teams rely on peer feedback to share knowledge and evolve their products.

Low-Code Democratizes Software Testing and Testing Knowledge

In recent years, low-code test automation solutions such as mabl have emerged as alternatives to traditional script-based frameworks. Using “record and replay” test authoring, these solutions expand the population of people who can create and maintain automated tests. QA professionals without significant coding experience can quickly start automating tests, while developers can easily run automated tests as they code. A shared responsibility for testing builds a culture of quality that makes it easier to build products that customers love.

However, the road to reaching the full potential of low-code test automation is much shorter with proven best practices. Peer review is a useful, practical way for teams to improve the quality of their tests while democratizing software testing knowledge across the entire team.

Modeling Peer Review Best Practices for Software Testing

When done correctly, peer review processes help build collective knowledge and evolve testing strategies in tandem with the product. Though the guidelines below may not be an exact fit for every organization, they’re a solid foundation for creating consistent peer review processes for teams looking to get the most out of low-code automated testing.

REVIEW THE TEST TITLE, DESCRIPTION, AND METADATA

Open the test page in the mabl application and review the high-level descriptive content:

  • Is the test title descriptive? Is it clear what functionality or user journey is supposed to be tested?
  • Does the test follow naming conventions?  Will teammates be able to intuitively locate the test by searching, even if the suite becomes very large?
  • Is the description sufficient to understand the test goals? Does it follow our internal guidelines for linkage to user stories, test cases, Jira tickets, etc.?
  • Is the test saved against a branch other than main? Should it be?
  • Does the test follow labeling standards? For example, do the labels indicate the type of test (smoke, validation, API, etc.), area of the application, or feature?  Remember that you may want to trigger all tests of a given type or feature using labels or analyze test coverage by feature.
  • Do we understand the criticality of the test?  Should a failure alert someone off-hours?  Block a release?  Be logged for triage by the sprint team?

REVIEW THE TEST STEP DESCRIPTIONS

On the test detail page, review the individual test steps:

  • Are the test step descriptions intuitive? Do they clearly communicate the purpose of the step? If not, suggest adding or modifying annotations to clarify.
  • Is the test well-organized? If there are any areas that would benefit from logical separation, suggest using echo steps.
  • Are there hard-coded values that should be variables?  Should those variables be shared at the environment level? (A sketch after this list illustrates the idea.)
  • Are variable naming conventions consistent?
  • Is the test data-driven?  Should it be?

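Because mabl steps are configured rather than hand-written, parameterization happens in the test editor, but the underlying idea is the same as in scripted tests. The sketch below is a hypothetical JavaScript illustration, with invented names and values, of replacing hard-coded inputs with scenario data so one test can cover several data rows.

  // Hypothetical sketch: turning hard-coded values into data-driven parameters.
  // The cart object, coupon codes, and scenarios are invented for illustration.
  const VALID_COUPONS = new Set(['SPRING10']);

  function makeCart() {
    return {
      quantity: 0,
      discount: false,
      setQuantity(q) { this.quantity = q; },
      applyCoupon(code) { this.discount = VALID_COUPONS.has(code); },
    };
  }

  // Before: the quantity and coupon code are baked into the test.
  function checkoutHardCoded() {
    const cart = makeCart();
    cart.setQuantity(3);          // magic number
    cart.applyCoupon('SPRING10'); // environment-specific value
    return cart.discount === true;
  }

  // After: the same check reads its inputs from scenario data,
  // so a single test runs once per row of a data table.
  const scenarios = [
    { quantity: 1, coupon: 'SPRING10', expectDiscount: true },
    { quantity: 3, coupon: 'EXPIRED99', expectDiscount: false },
  ];

  function checkoutDataDriven(scenario) {
    const cart = makeCart();
    cart.setQuantity(scenario.quantity);
    cart.applyCoupon(scenario.coupon);
    return cart.discount === scenario.expectDiscount;
  }

  scenarios.forEach(s => console.log(checkoutDataDriven(s))); // true, true
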
REVIEW THE TEST IN ACTION

If possible, replay the test step by step in the Trainer (or equivalent) to observe the actual functionality being tested.

  • Are all steps passing?
  • Are there any anti-patterns in use, such as fixed wait times, XPath queries, or CSS queries? If so, is there an annotation explaining why?
  • Does each step achieve its purpose?  Will it prevent both false positive and false negative results?
  • Are there areas where the test appears to be waiting too long between steps? Could “wait until” steps or other modifications speed these up without compromising reliability? (The sketch after this list contrasts the two approaches.)
  • Are there logical points where we are missing assertions? Are any assertions unnecessary?

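In mabl itself, fixed waits and conditional waits are configured as low-code steps, so no code is involved; the hypothetical JavaScript sketch below (function names and timings are invented) only illustrates why a condition-based wait is usually both faster and more reliable than a fixed delay.

  // Hypothetical sketch contrasting a fixed wait with a condition-based wait.
  // isReady() stands in for any readiness check: element visible, API responding, etc.
  const sleep = ms => new Promise(resolve => setTimeout(resolve, ms));

  // Anti-pattern: always wait 10 seconds, even if the app is ready in 1 second,
  // and still fail if it takes 11.
  async function runAfterFixedWait(action) {
    await sleep(10000);
    return action();
  }

  // Better: poll until the condition holds or a timeout is reached, so the step
  // is faster on average and more tolerant of occasional slow responses.
  async function runWhenReady(isReady, action, { timeoutMs = 30000, intervalMs = 500 } = {}) {
    const deadline = Date.now() + timeoutMs;
    while (Date.now() < deadline) {
      if (await isReady()) return action();
      await sleep(intervalMs);
    }
    throw new Error(`Condition not met within ${timeoutMs} ms`);
  }
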
REVIEW THE PLAN/ENVIRONMENT/APPLICATION CONFIGURATION

Consider how the test will be run and whether the configuration could be optimized. For example:

  • Is the test added to a plan? Should it be?
  • Is the test enabled? Should it be?
  • Will this test be run in sequence or in parallel with other tests?  Have we protected against collisions between tests?  Are there cases where another test could lead this test to fail, or where this test could cause another test to fail? (One common mitigation is sketched after this list.)
  • Will this test be run across browsers?  Could simultaneous executions create issues with authentication, application data, or otherwise?
  • If the test is data-driven, are the scenarios sufficient? Is each scenario distinct enough to contribute meaningfully to test coverage?

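One common way to protect against collisions between tests that run in parallel or across browsers is to namespace any data a test creates with a unique, per-run identifier. A hedged JavaScript sketch of the idea follows; the helper name, prefix format, and TEST_RUN_ID environment variable are all hypothetical.

  // Hypothetical helper: build unique names for records a test creates, so
  // parallel or cross-browser runs never compete for the same data.
  function uniqueName(base, runId = process.env.TEST_RUN_ID) {
    // Fall back to a timestamp plus a random suffix if no run ID is provided.
    const suffix = runId || `${Date.now()}-${Math.random().toString(36).slice(2, 8)}`;
    return `${base}-${suffix}`;
  }

  // Example: each run creates (and later cleans up) its own project record
  // instead of every run sharing a single "QA Test Project".
  const projectName = uniqueName('peer-review-demo-project');
  console.log(projectName); // e.g. "peer-review-demo-project-1663243200000-a1b2c3"
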
RE-REVIEW THE TEST STEPS

After stepping through the test, review the steps again and consider the following:

  • Are there opportunities for reuse/sharing to minimize redundancy across tests? Should we create reusable flows for any sets of steps?  Should we share any JavaScript snippets?
  • Are there “setup” steps in the test to prepare the application under test? Should they be broken out into a flow or a separate test that runs prior to this one in sequence? Would using API steps be more efficient for test setup (see the sketch after this list)?
  • Does the test account for pre-existing data?
  • Does the test include teardown/clean up steps? For example, are objects created by the test also removed by the test?  Are application settings returned to their default state? Should the teardown steps be broken out into a subsequent test so that cleanup occurs even if the test fails?
  • Should any of the variables be randomized with generated/“fake” values?
  • Would any of the variables benefit from identifying information, such as the test run ID or timestamp?
  • Are flow variables, datatable variables, and test-driven variables commingled?
  • If tests share variables, is the ‘share variables’ toggle enabled/disabled accordingly?

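If API steps are an option for setup, the idea is to create the data the UI test depends on with a direct request rather than by clicking through creation screens. The sketch below is a hypothetical example; the endpoint, payload, and token handling are invented and would need to match your application's actual API.

  // Hypothetical setup step: create fixture data through the application's API
  // so the UI test can focus on the behavior being verified.
  // The endpoint, payload shape, and auth scheme are invented for illustration.
  async function createFixtureProject(baseUrl, apiToken) {
    const response = await fetch(`${baseUrl}/api/projects`, {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        Authorization: `Bearer ${apiToken}`,
      },
      body: JSON.stringify({ name: `peer-review-fixture-${Date.now()}` }),
    });
    if (!response.ok) {
      throw new Error(`Fixture setup failed with status ${response.status}`);
    }
    return response.json(); // e.g. { id, name } captured as variables for later steps
  }
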
CONSIDER THE OVERALL STRUCTURE AND IMPLEMENTATION APPROACH

After re-reviewing the steps individually, step back and consider whether there are any general opportunities to improve:

  • Is this an effective test of the target functionality or user flow?
  • Is this really one test? Could it be broken down into separate tests?
  • Are there other scenarios that we should be testing?
  • If we're working on a branch other than main, have we thought through how the test will be merged to our main branch? Is there a risk that it will fail in some environments that do not yet contain code changes that the test depends on?
  • What happens if the test fails?  Will it leave the application in a state that could lead subsequent runs or other tests to fail?
  • Does this test supersede any existing test(s) that can be turned off or retired?

Create a Culture of Quality with Low-Code Test Automation 

When testing is a shared responsibility, software teams can deliver new features faster and with higher confidence. Low-code test automation is a valuable tool for democratizing quality across the organization, but processes like peer review are essential for making quality engineering and DevOps sustainable for the long term.

Regardless of where they are in their DevOps adoption journey, every software development team can benefit from formalizing peer review guidelines that make knowledge sharing the norm. These guidelines should provide a solid foundation for helping everyone share their expertise and improve quality practices with low-code testing. 

See how low-code test automation can help your team build a culture of quality with mabl's 2-week free trial.