There are many types of software testing, and some are more beneficial to your organization than others. Let's explore the pros and cons of automated software testing types and find the best fit for you.
When you think of software testing, you may have a loose idea of what it means. In practice, however, software testing encompasses many different types, some of which lend themselves well to automation and others that are less than ideal fits.
This article will help provide a basic overview of some of the main types of software testing and their purposes. It is important to note that testing has become quite sophisticated and the variation of its different forms has grown dramatically over the years; too many, in fact, to cover in one article in any detail. So in this article, we’ll break down the most common types of testing into groups of testing types that fit together.
Before we begin, it is important to note that most of these testing types should not be treated individually; they work best as a segment of a whole. Ideally you would want to perform as many of these as possible, but any testing at all is better than none.
Software testing in general can be broken up into two broad categories, “functional” and “non-functional” types of testing. The former better lends itself to automation, so we will spend most of our time on functional types.
At a very high level, functional testing is a test of business requirements regarding what the software should do and what it should not do. It is focused specifically on how the software works, and whether it meets the given needs in a functional way.
In most cases, functional testing focuses on output and not the inner workings of the software. However, there are exceptions, at least in terms of the minutiae for each testing type.
Unit testing occurs at the lowest level of the software. It is generally performed by developers to test individual methods and functions within the code. The tests typically consist of a set of preset commands that mimic normal behavior in an application and are fed dummy data to verify that it is processed correctly.
They are typically automated and written directly into the build process. This ensures that any new code created by developers is tested before it goes into production (or sometimes even into staging) so that it does not break other code.
The advantages of unit testing are that it is inexpensive, easy to run, and easy to automate. Whether a function produces the expected result can be verified with simple assertions. One of its weaknesses is that unit testing is not particularly comprehensive and is only as good as the test data being used.
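As a concrete sketch, here is what a unit test might look like in Python for a hypothetical `format_price` function (all names here are illustrative, not taken from any particular codebase):

```python
# Hypothetical function under test: formats an integer number of cents
# as a dollar string like "$12.34".
def format_price(cents):
    if cents < 0:
        raise ValueError("price cannot be negative")
    return f"${cents // 100}.{cents % 100:02d}"

# Unit tests feed the function preset "dummy" inputs and check each output.
def test_format_price_whole_dollars():
    assert format_price(500) == "$5.00"

def test_format_price_with_cents():
    assert format_price(1234) == "$12.34"

def test_format_price_rejects_negatives():
    try:
        format_price(-1)
    except ValueError:
        pass  # expected
    else:
        raise AssertionError("expected ValueError for negative input")
```

Because each test exercises one function in isolation with fixed inputs, suites like this run in milliseconds and are easy to wire into a build pipeline.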
Integration testing is designed to ensure different parts of an application work well with each other. This process includes connection to databases, ensuring microservices are firing properly, and/or testing that any APIs are handling information correctly.
To run these tests, multiple parts of the application must be up and running. As a result, running integration testing can be somewhat expensive.
On the positive side, much of this process can be automated. It is important to note, however, that these tests run the risk of false positives: while integration tests validate that components are firing properly, they are not designed to identify whether the results obtained are accurate, just that there are results at all.
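A minimal integration-test sketch, assuming a hypothetical service layer backed by a real (in-memory) SQLite database — the class and method names are invented for illustration:

```python
import sqlite3

# Hypothetical data layer: stores users in a real SQLite database.
class UserRepository:
    def __init__(self, conn):
        self.conn = conn
        self.conn.execute(
            "CREATE TABLE IF NOT EXISTS users (id INTEGER PRIMARY KEY, email TEXT UNIQUE)"
        )

    def add(self, email):
        cur = self.conn.execute("INSERT INTO users (email) VALUES (?)", (email,))
        return cur.lastrowid

    def find(self, user_id):
        row = self.conn.execute(
            "SELECT email FROM users WHERE id = ?", (user_id,)
        ).fetchone()
        return row[0] if row else None

# Hypothetical service layer that depends on the repository.
class SignupService:
    def __init__(self, repo):
        self.repo = repo

    def register(self, email):
        if "@" not in email:
            raise ValueError("invalid email")
        return self.repo.add(email)

# Integration test: exercises the service AND the database together,
# rather than mocking the database away as a unit test would.
def test_register_persists_user():
    conn = sqlite3.connect(":memory:")
    service = SignupService(UserRepository(conn))
    user_id = service.register("dev@example.com")
    assert UserRepository(conn).find(user_id) == "dev@example.com"
```

The key difference from a unit test is that multiple layers (service plus database) must be up and wired together, which is what makes integration tests slower and more expensive to run.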
System testing, true to its name, tests the system as a whole. It is typically done after integration testing but before acceptance testing.
System tests are broader than those generally run during integration; they look at the entire system to see whether it runs and meets all requirements. These may include: Can the servers run? Do the databases run? Does everything work as a whole? This is close to a final test, and typically done before an end-to-end test.
End-to-end tests are tests of the full application. In an end-to-end test, you're trying to replicate user behavior by following user paths. Some examples of this include signing up for an account or making a purchase on a site.
These tests can be somewhat expensive to run manually, so automating them at least at some level can be helpful. It is almost always a good idea to run at least a few end-to-end tests as they give you insight into the user experience.
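To make the idea concrete without requiring a real browser, the sketch below follows one complete user path (sign up, log in, add to cart, check out) against a hypothetical in-memory stand-in for a shop. In practice, end-to-end tests usually drive an actual browser with a tool such as Selenium or Playwright:

```python
# A tiny in-memory stand-in for a web application, invented here so the
# flow below is runnable; a real e2e test would drive the live app.
class FakeShop:
    def __init__(self):
        self.users, self.carts, self.orders = {}, {}, []

    def sign_up(self, email, password):
        self.users[email] = password
        self.carts[email] = []

    def log_in(self, email, password):
        return self.users.get(email) == password

    def add_to_cart(self, email, item):
        self.carts[email].append(item)

    def checkout(self, email):
        order = {"email": email, "items": self.carts[email]}
        self.orders.append(order)
        self.carts[email] = []
        return order

# End-to-end test: replicate one user's path from signup through purchase.
def test_purchase_flow():
    shop = FakeShop()
    shop.sign_up("buyer@example.com", "s3cret")
    assert shop.log_in("buyer@example.com", "s3cret")
    shop.add_to_cart("buyer@example.com", "coffee mug")
    order = shop.checkout("buyer@example.com")
    assert order["items"] == ["coffee mug"]
    assert shop.carts["buyer@example.com"] == []  # cart emptied after purchase
```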
Smoke testing consists of a group of tests that verify whether the functionality of a specific build works at all, and whether it works as expected.
These are usually done early in testing; however, they are also often used throughout application development. They are not extensive, consisting instead of surface-level tests designed to identify whether a product is ready for further testing.
Smoke tests are basically tests to see if a system “runs” and does not “catch fire” (or smoke) when running basic functionality. Smoke tests should cover major functionality within the software, but not go in depth. Their limited nature makes them a poor substitute for end-to-end testing.
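A smoke suite can be as simple as a list of shallow checks, one per major area. In this sketch, the hypothetical `FakeApp` stands in for real HTTP calls against a deployed build:

```python
# Hypothetical smoke suite: a handful of shallow checks that answer one
# question per subsystem -- "does it run at all?" -- without going deep.
def check_homepage(app):
    return app.get("/") == 200

def check_login_page(app):
    return app.get("/login") == 200

def check_api_health(app):
    return app.get("/api/health") == 200

def run_smoke_tests(app):
    checks = [check_homepage, check_login_page, check_api_health]
    failures = [c.__name__ for c in checks if not c(app)]
    return failures  # empty list means the build is ready for deeper testing

# Minimal fake app so the sketch is runnable here; a real smoke suite would
# issue HTTP requests against a deployed environment.
class FakeApp:
    def get(self, path):
        routes = {"/": 200, "/login": 200, "/api/health": 200}
        return routes.get(path, 404)
```

Note that none of the checks inspects the page contents; they only confirm that the major routes respond, which is exactly the "runs and does not catch fire" bar smoke tests are meant to clear.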
Regression testing is designed to reduce the unintended consequences of new builds or bug fixes. It exists to ensure that new features don't break old ones. Regression tests typically require automation because of the time involved in re-running tests over every feature. If built correctly, they can be run whenever new builds are deployed, and, at the very least, end-to-end tests should include some regression testing.
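One common pattern is to pin current, known-good behavior to "golden" expected values, so that any future change that alters the behavior fails loudly. A sketch, with a hypothetical discount calculator standing in for a real feature:

```python
# Hypothetical feature: a discount calculator whose behavior we want to lock in.
def apply_discount(total, code):
    if code == "SAVE10":
        return round(total * 0.90, 2)
    if code == "SAVE25":
        return round(total * 0.75, 2)
    return total  # unknown codes leave the total unchanged

# "Golden" expectations captured from the last known-good release.
GOLDEN_CASES = [
    (100.00, "SAVE10", 90.00),
    (100.00, "SAVE25", 75.00),
    (100.00, "BOGUS", 100.00),
]

# Regression test: if a new build changes any of these results, it fails
# immediately, flagging the unintended side effect before deployment.
def test_discounts_have_not_regressed():
    for total, code, expected in GOLDEN_CASES:
        assert apply_discount(total, code) == expected, (total, code)
```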
Sanity testing is used when there is insufficient time to run all tests. A good sanity test will touch each feature to ensure it at least works as designed in an ideal, typical scenario. Sanity tests are by definition not extensive, but they are better than no testing at all.
Beyond functional testing, there are a number of other types of tests commonly run on software. These are not tests of how the software functions in and of itself, but relate more closely to user input. In other words, they involve the human element, and therefore do not lend themselves to much automation.
Acceptance testing is a type of testing typically done by the client to ensure the software meets business requirements. These tests go beyond the simple functionality of individual components and are more closely tied to specific business goals. The tester works from a scripted list of things the software should accomplish, typically written up in the project requirements before the project was started.
In some ways, these are much like functional testing, but the difference is this type of test is conducted by the client, not the developers. Once a client is satisfied with the functionality of the product, they sign off and the product can be released.
Performance testing is a test of how well the software behaves in real world environments. This category of testing includes load testing, which is a test of whether it can handle large amounts of data, and various types of stress testing, such as whether it can handle many users or transactions at one time. These generally cannot be tested manually, and must be performed using multiple servers. They are also generally based on real-world situations.
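A very rough sketch of the idea: fire many concurrent requests at a handler and record latency percentiles. The `handle_request` function here is an invented stand-in for the real system under test, which in a genuine load test would be exercised over the network from multiple machines:

```python
import time
from concurrent.futures import ThreadPoolExecutor

# Hypothetical handler standing in for one request to the system under test.
def handle_request(payload):
    time.sleep(0.001)  # simulate a small amount of work
    return {"ok": True, "size": len(payload)}

# Minimal load-test sketch: issue requests concurrently and summarize latency.
def run_load_test(num_requests=200, concurrency=20):
    latencies = []

    def one_request(i):
        start = time.perf_counter()
        handle_request("x" * 100)
        latencies.append(time.perf_counter() - start)

    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        list(pool.map(one_request, range(num_requests)))

    latencies.sort()
    return {
        "requests": len(latencies),
        "p50_ms": latencies[len(latencies) // 2] * 1000,
        "p95_ms": latencies[int(len(latencies) * 0.95)] * 1000,
    }
```

Real load-testing tools add ramp-up schedules, distributed workers, and richer reporting, but the core loop — concurrency plus latency percentiles — looks much like this.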
Usability testing is a test of the software as it appears to an actual user. In these cases, individuals are observed to assess whether they understand the software and can meet basic business needs with it. This is a crucial part of testing which almost by definition cannot be automated; an actual person, unfamiliar with the product itself, must do the testing. It should generally be performed throughout the development process, as well as after release. As a result, it can become time-consuming and expensive. However, ignoring it comes with some peril: regardless of how functional a site is, if people can't figure it out, they won't use it.
Compliance testing is another form of testing that is difficult to automate. These tests involve making sure that a site is accessible to people covered under the Americans with Disabilities Act (ADA) or similar regulations in other countries. This means making sure that the correct semantic tags are used and can be read by someone who relies on a screen reader. An ideal test user in this case will typically be someone with the relevant disability.
Security testing is a common and highly necessary process in which someone attempts to actually break into your software and circumvent its safeguards. In this case, testing involves attempting to do things that you don't want people to be able to do, and using the results to improve security. While not functional testing, it is an extremely important part of any piece of software that will have an internet component.
This is only a brief overview of testing types, and not all of those listed can be automated, but it should give anyone getting involved in software testing an idea of what can and cannot be done through the testing process. The above represents an ideal scenario in which “perfect testing” (a mythical state that does not typically exist) can be done, but we've also pointed out a few shortcuts for specific areas of testing. This should get you started on identifying where to test, and where to automate.
Each testing type has its pros and cons, but each piece also needs to be considered as a discrete part of the whole. Each phase cannot perform all the functions handled by the others, so they should be seen as a part of a process. In an ideal world, every piece should be performed, but as we know, for various reasons and in different scenarios, it is often not possible to complete all of these tests.
Software testing is not a perfect science; it is difficult to imagine every scenario which might cause a product to fail. However, it’s equally important to remember that some testing is always better than none. Learn more about the mabl platform to see how mabl can help your team with automated software testing.