Performance regression testing is a comparative approach that examines how a software application performs across successive builds. In intelligent test automation, this is done by simulating a variety of usage scenarios, many of which place the application under adverse performance conditions. Simply put, performance regression testing provides feedback on how the application’s performance varies with recent changes in development.

When a tester observes a performance regression in the latest build of an application, such as slower page load times, it’s typically the result of a recent change to that application. The tester confirms this by comparing the performance of the previous build against that of the current build. A performance regression is especially undesirable if the analysis reveals an unjustifiable deterioration in performance; any performance improvements, of course, are welcome.
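
To make the comparison concrete, here’s a minimal sketch in Python of a build-over-build check. The timing samples and the 10% tolerance are illustrative assumptions, not output from any particular tool.

    # Compare median page load times between two builds and flag a regression.
    from statistics import median

    previous_build_ms = [820, 790, 845, 810, 800]      # previous build, same scenario
    current_build_ms = [1210, 1180, 1250, 1190, 1230]  # current build

    baseline = median(previous_build_ms)
    current = median(current_build_ms)
    tolerance = 1.10  # flag anything more than 10% slower than baseline

    if current > baseline * tolerance:
        print(f"Performance regression: median {baseline:.0f} ms -> {current:.0f} ms")
    else:
        print("No regression detected")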

Why teams neglect performance testing

There are several classic reasons why development teams tend to neglect performance testing. Too often, the project manager decides not to run performance tests because of time constraints. This happens most readily when the delivery date is near and the inclusion of (important but untested) last-minute functionality is given higher priority than quality assurance.

Another problem is that many teams don’t bother to prepare any performance requirements, so there’s no basis for appropriate acceptance criteria. Commonly, performance regression testing is seen as a burden that comes with each major build: configuring and deploying to a test environment, executing the load test(s), and then analyzing the results.
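
Requirements don’t have to be elaborate to be useful. As a sketch, a handful of response-time budgets can serve as explicit acceptance criteria; the endpoints and numbers below are hypothetical placeholders.

    # Performance requirements expressed as acceptance criteria (hypothetical values).
    REQUIREMENTS_MS = {
        "/login": 500,     # 95th-percentile response time budget
        "/search": 800,
        "/checkout": 1200,
    }

    def budget_violations(p95_results_ms):
        """Return the endpoints whose measured p95 exceeds its budget."""
        return [
            endpoint
            for endpoint, budget in REQUIREMENTS_MS.items()
            if p95_results_ms.get(endpoint, 0) > budget
        ]

    print(budget_violations({"/login": 430, "/search": 910, "/checkout": 1100}))
    # -> ['/search']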

For any company with a growing user base, it becomes imperative to take performance testing seriously. Indeed, it’s vital to run load tests on each application build so that performance regressions are caught as soon as they’re introduced, not after release.

The importance of performance regression testing

As software applications grow more complex and serve much larger volumes of users, maintaining an adequate level of performance is becoming an acute challenge for many organizations. A performance regression typically has negative consequences, such as increased maintenance costs and user dissatisfaction (which eventually affects profitability).

Regressions can sometimes lead to non-trivial financial loss. For example, a widely cited estimate held that a one-second delay in page load time could cost Amazon as much as $1.6 billion in sales annually. Much research has since focused on the root causes of performance regressions in software applications, and additional work has shown the substantial value of detecting regressions by analyzing operational data such as performance logs and counters.

Push performance testing upstream

Many teams embrace the wonderful concepts of continuous integration and continuous testing, but then surprisingly restrict the focus of the CI/CD initiative to unit and functional testing only. Performance tests get put in cold storage until the next build nears release. The thinking is that functional testing must first bring quality to a high enough level; only then does performance testing make sense. It need not be so!

Yes, it requires some effort, but pulling performance tests upstream and running them frequently can dramatically increase their value. Many teams that have taken this approach find that they catch the stickier, messier bugs earlier in the cycle. Performance tests can also substantially augment any suite of functional tests.
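
What might that look like in a pipeline? One hedged sketch: a pytest-style smoke check that runs next to the unit tests on every build. The staging URL and the two-second budget are assumptions for illustration.

    # A performance smoke test that can run in CI alongside unit tests.
    import time
    import urllib.request

    def test_homepage_load_budget():
        start = time.perf_counter()
        with urllib.request.urlopen("https://staging.example.com/") as response:
            assert response.status == 200
        elapsed = time.perf_counter() - start
        assert elapsed < 2.0, f"Homepage took {elapsed:.2f}s, budget is 2.0s"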

Performance tests should be automated

In nearly all cases, a development team will find it necessary to automate performance testing. This is because it’s extremely difficult to simulate heavy loads or activity volumes with manual testing methods. Think about it: clicking the Submit button thousands of times by hand is far more tedious and far less repeatable than submitting the same transaction through automation.
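
To make the contrast concrete, here’s a minimal load-generation sketch using only the Python standard library. The target URL, worker count, and request count are hypothetical; dedicated tools add ramp-up, think time, and reporting on top of this idea.

    # Submit the same transaction thousands of times, concurrently.
    import time
    import urllib.request
    from concurrent.futures import ThreadPoolExecutor

    def submit_once(url):
        start = time.perf_counter()
        with urllib.request.urlopen(url) as response:
            response.read()
        return time.perf_counter() - start

    URL = "https://staging.example.com/submit"
    with ThreadPoolExecutor(max_workers=50) as pool:
        timings = list(pool.map(submit_once, [URL] * 2000))

    print(f"{len(timings)} requests, slowest: {max(timings):.2f}s")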

The other benefit of automation is that performance tests can run at any time of day, including weekends and holidays. That flexibility lets the QA team run tests overnight to verify changes made late in the day, with results ready well before testers arrive the next morning.

Scale up performance testing

If your team doesn’t do performance testing, now is the time to begin. Perhaps you can only afford to build a few basic performance tests, but these will provide major benefits, especially when they run every day. Start with a single transaction, then parameterize the test to accept an array of test data and inputs. Scale that transaction upward with a free tool such as JMeter, or by starting a free trial with mabl. Add more transactions, one at a time, until you accumulate a good sampling of the most important transactions in your system.
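
As a sketch of that first step, here’s one transaction parameterized over an array of test inputs (the endpoint and search terms are made up for illustration). In JMeter, a CSV Data Set Config element plays the same role, feeding rows of test data into a sampler.

    # One transaction, parameterized over an array of test data.
    import urllib.parse
    import urllib.request

    TEST_INPUTS = ["laptop", "headphones", "usb-c cable", "monitor"]

    def run_search(term):
        query = urllib.parse.urlencode({"q": term})
        with urllib.request.urlopen(f"https://staging.example.com/search?{query}") as r:
            return r.status

    for term in TEST_INPUTS:
        assert run_search(term) == 200, f"search for {term!r} failed"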

The latest performance testing tools are much easier to use than their predecessors, and most support features that were previously tedious to set up: assertions for system response validation, parameterization, and distributed load generation.
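
An assertion for response validation, for instance, checks what came back rather than just how fast it arrived. A minimal sketch, with a hypothetical health endpoint and expected text:

    # Validate the content of a response, not just its timing.
    import urllib.request

    with urllib.request.urlopen("https://staging.example.com/health") as response:
        assert response.status == 200
        body = response.read().decode("utf-8")

    assert "ok" in body.lower(), "health check did not report OK"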

Conclusion

It’s challenging enough to troubleshoot and fix performance issues; the burden increases substantially when someone has to sift through weeks of code changes to find a root cause. Catching performance regressions with automated performance tests is not only efficient, it can be liberating. Some say it’s priceless. By closing the gap between the time a performance issue is introduced and the time it’s found, troubleshooting becomes far simpler. A happy side effect is that the team gains more time to improve overall product quality. Given the resulting mix of coverage, flexibility, and efficacy, it’s usually best to automate performance tests for daily execution.