Getting Started with Automated Testing
Getting started with automated software testing can be tedious and confusing, so we’ve put together a guide on how to get your organization going on the path to automation.
Okay, so you’ve decided to migrate from your old manual processes to automated software testing. You’ve weighed the pros and cons and understand the advantages of automation.
Your next question is naturally, “well, how do I do it?”
In this guide, we’ll go through that process so that you can begin to get a feel for what needs to be done.
Create a plan
While it’s certainly possible to transition to automated testing haphazardly, with different testers and developers using different tools on their own, it is definitely not recommended. Spending some time familiarizing yourself with testing tools to map out your path is not a bad idea, but the best move is to treat the migration to automated testing the way you’d treat any other project: systematically, with a plan in place before you start.
Identify Testable Cases
Before you begin testing, it’s important to identify what specifically can be tested. One of the easiest ways of doing this is going through a list of your already established manual testing procedures.
Take notes about which procedures are repetitive and could easily be put into a script. For instance, let’s say your site has a registration form with 10 fields, half of which are required. Typically you want to be able to test these scenarios:
- Does the form work when everything is entered correctly?
- What happens when someone leaves out a field, both required and not required?
- If an error is supposed to appear, does it? What happens after the user corrects the error? Do they need to re-enter other fields?
- How about tabbing through fields? What happens if someone hits “enter” in the middle of filling out the form?
For each of these cases, you can set up a test. This can be a routine, such as “click, enter, tab” or “click, enter, click.” In some instances, the procedure is identical apart from which field is selected.
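The registration-form cases above can be sketched as a small suite of automated checks. This is a minimal, hypothetical example: the field names and the `validate` function stand in for your real application (in practice you might drive a browser with a tool like Selenium instead).

```python
# Hypothetical registration form: 10 fields, half of them required.
REQUIRED = {"name", "email", "password", "address", "phone"}
OPTIONAL = {"company", "title", "fax", "website", "referral"}

def validate(form: dict) -> list:
    """Return an error message for each missing required field."""
    return [f"{field} is required" for field in sorted(REQUIRED - form.keys())]

def test_complete_form_passes():
    # Everything entered correctly: no errors expected.
    form = {f: "x" for f in REQUIRED | OPTIONAL}
    assert validate(form) == []

def test_missing_required_field_errors():
    # Leaving out a required field should surface exactly one error.
    form = {f: "x" for f in REQUIRED - {"email"}}
    assert validate(form) == ["email is required"]

def test_missing_optional_field_ok():
    # Leaving out every optional field should not produce errors.
    form = {f: "x" for f in REQUIRED}
    assert validate(form) == []
```

Each routine here mirrors one of the manual scenarios, which makes it easy to see which procedures are identical apart from the field being exercised.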
What can be automated?
The very first step is to identify which of these tests can and cannot be automated. Not all of them can or should be. Let’s zero in on the cases that are best suited for automation and look at several areas that lend themselves to it.
Areas that need a lot of testing
The obvious cases that can be automated are the ones that require frequent runs, especially if they need to use a considerable amount of data. These can encompass an entire website, but can be narrowed to individual areas which are most likely to have many parameters, such as shopping carts in ecommerce sites.
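A data-heavy, frequently run check like a shopping-cart total is a natural fit. Below is a hedged sketch: the `cart_total` pricing rule is a made-up stand-in for your real cart logic, and the point is the table of parameter combinations driving one test.

```python
def cart_total(items, discount=0.0):
    """Sum (price, quantity) pairs and apply an optional discount."""
    subtotal = sum(price * qty for price, qty in items)
    return round(subtotal * (1 - discount), 2)

# One row per scenario: items, discount, expected total.
CASES = [
    ([(19.99, 1)], 0.0, 19.99),
    ([(19.99, 2), (5.00, 3)], 0.0, 54.98),
    ([(100.00, 1)], 0.10, 90.00),
]

def test_cart_totals():
    for items, discount, expected in CASES:
        assert cart_total(items, discount) == expected, (items, discount)
```

Adding a new parameter mix is one more row in `CASES`, which is exactly why high-volume areas repay automation quickly.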
Time consuming tasks
Some tasks may not need extensive testing, but if each instance is time consuming, it may be a good area for automation. If it requires, for instance, logging in, saving information, logging out of an application, going in through another browser, logging in, checking information, etc. (you get the idea, any long and tedious processes), these areas could be automated.
Aspects that need to be checked on multiple platforms
If you have parts of your software that will be run on different operating systems, web browsers or platforms, each should be tested in those environments. It’s best to set up several automatic procedures for these areas so you are able to run tests in them across all these different environments each time it is needed.
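One common pattern for this is running the same assertion across a list of environments. In this sketch, `page_title` is a placeholder; in a real suite it would launch a driver per browser (e.g. via Selenium) and read the live page.

```python
# Hypothetical list of environments the product must support.
ENVIRONMENTS = ["chrome", "firefox", "safari", "edge"]

def page_title(env: str) -> str:
    # Placeholder: a real test would start a browser for `env`,
    # load the page, and return driver.title.
    return "Welcome"

def test_title_on_all_platforms():
    failures = [env for env in ENVIRONMENTS if page_title(env) != "Welcome"]
    assert not failures, f"title check failed on: {failures}"
```

Collecting failures before asserting means one run reports every broken environment at once, instead of stopping at the first.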
High Risk/Business Needs
If there are areas that are crucial to business needs, these should be tested rigorously. Any sort of user interaction where gaining contact info or ecommerce activity exists should be built into automation procedures.
If there are areas where failure can cause significant loss to a company, these need to be tested rigorously so that an individual user cannot cause harm. Security tests are a great example to help in this category.
What Testing should stay Manual?
Exploratory testing is a random activity in which a user goes into the site cold and simply tries to find things. This cannot be automated, as it depends on actual human behavior, which is not easily predictable (yet).
Usability testing is entirely its own category and only works with actual human subjects. By definition, this process cannot be automated.
While some aspects of accessibility testing can be automated (e.g. how a screen reader parses text), realistically it needs to be monitored as it occurs. Treat it like usability testing and involve well-selected testers.
What actions will tests perform?
Divide tests into discrete pieces
When building a test plan, it’s best to treat individual activities separately. Each test should focus on one objective (e.g. whether a login form works properly).
This is because smaller fragments are both easier to debug and easier to reuse when sharing code, data, and processes.
Create reusable test components
Throughout the testing process, you will find yourself creating similar pieces over and over. Creating reusable components is par for the course when using a framework, and can be done with the features of automated tools, like mabl’s reusable flows. Save these as small libraries you can call on. When constructing your individual test pieces, make them resistant to changes in the front-end user interface.
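A reusable component can be as simple as one shared login helper, so a UI change only needs fixing in one place. This is a hedged sketch in the spirit of reusable flows; the `app` dictionary and its keys are invented stand-ins for a real session.

```python
def login_flow(app, user="tester", password="secret"):
    """Reusable component: log in and return the landing page."""
    app["user"] = user         # stands in for filling the login form
    app["page"] = "dashboard"  # stands in for submitting it
    return app["page"]

def test_dashboard_loads():
    app = {}
    assert login_flow(app) == "dashboard"

def test_profile_reachable_after_login():
    app = {}
    login_flow(app)            # same shared component, different test
    app["page"] = "profile"    # navigate after the login step
    assert app["page"] == "profile"
```

If the login form changes, only `login_flow` needs updating; every test that calls it keeps working.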
Group test components
After creating a series of individualized tests, each can and should be built into a larger script or scripts so that a thorough test can be performed with a click of a button or triggered through a regular schedule. Build a test tree to specify when and if a test should run.
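Grouping can be done with the standard library’s `unittest`, which lets you assemble individual test classes into one suite that fires with a single command or a scheduled job. The test bodies below are placeholders.

```python
import unittest

class LoginTests(unittest.TestCase):
    def test_valid_login(self):
        self.assertTrue(True)  # placeholder for a real login check

class CheckoutTests(unittest.TestCase):
    def test_cart_total(self):
        self.assertTrue(True)  # placeholder for a real cart check

def build_suite():
    """Assemble a 'test tree': choose which groups run in this pass."""
    loader = unittest.defaultTestLoader
    suite = unittest.TestSuite()
    suite.addTests(loader.loadTestsFromTestCase(LoginTests))
    suite.addTests(loader.loadTestsFromTestCase(CheckoutTests))
    return suite

if __name__ == "__main__":
    unittest.TextTestRunner().run(build_suite())
```

Conditional logic in `build_suite` (e.g. skipping a group on a given branch) is one simple way to express when and if a test should run.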
Test Early and Often - Testing Principles
There are a few basic principles you should keep in mind when testing:
- “Shift left” – move testing earlier into the process.
- The more you test, the more bugs you will find.
- Fix as you go.
- Frequent testing will speed up the entire process by removing the need for a lengthy QC stage near the end and produce a better product.
Choose the Right Tools
Record and play tools
These tools are relatively easy to use but have serious limitations when it comes to producing new tests, which is one of the key drivers behind automation. They can, however, be a good way to prototype tests. If you do use a record and play tool, it’s important to find one that lets you edit segments of a script.
Testing frameworks
Testing frameworks, such as Selenium, Appium, or Cucumber, are fantastic for operations that have highly skilled testers. However, using them requires considerable programming ability. They can be ideal for large sites, and for situations where a tremendous amount of testing will be required. Be aware that the ramp-up period for these tools is long.
Hybrid tools
Hybrid tools, such as mabl, combine the ease of use of record and play tools with the flexibility of testing frameworks. In addition, they’re mostly code-free, making them easy to use for people without any programming ability. You can begin by recording full testing sessions and then editing, saving, and extending individual components for future tests.
Determine your test grid
Identify whether you are going to test locally on your own servers, or through a cloud service. Each has its advantages and disadvantages.
Local testing is easier to administer at first. If you already have the hardware, the cost is minimal. One risk of this approach is testing too much in a controlled laboratory-like environment which does not resemble the real world. You may pick up many functional bugs, but may miss issues that come from dealing with internet protocols and excessive traffic.
Cloud testing is harder to maintain, and can be more expensive, particularly if you use cloud platforms which charge per process. However, it’s easier to mimic real world conditions using this approach, and it becomes possible to do some load testing.
You are probably best off using some local testing relatively early in the process, to make sure the basic functional features are taken care of, and then migrating your tests to the cloud once the product is working.
Assign roles based on skills
Not everyone has the same set of skills. Traditionally, manual software testing has not required a particularly strong technical background, but being a user is a good attribute for understanding how real people interact with software. Automated testing typically involves a lot more complexity, so these testers should focus on creating the test plans and scripts that the automation will run through.
Your more technically advanced staff can then write the software and encode the test cases created by your level 1 staff.
Use Good Data
It’s a good idea to use test data that approximates what real data for the process you’re testing looks like. This forces applications to be tested in real world scenarios (instead of looking for foo and bar everywhere, you can have a person looking for a specific product).
This data, if it is well-structured and organized, can easily be reused and appended if needed.
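Structured test data like this is easy to keep in a CSV or JSON file and reuse across suites. A small sketch, with invented product rows, using the standard library’s `csv` module:

```python
import csv
import io

# Invented, realistic-looking rows; in practice this lives in a file
# checked in alongside the tests.
SAMPLE = """name,price,in_stock
Trail Running Shoes,89.95,true
Insulated Water Bottle,24.50,false
"""

def load_test_data(text):
    """Parse CSV text into a list of row dictionaries."""
    return list(csv.DictReader(io.StringIO(text)))

def test_search_uses_realistic_product():
    rows = load_test_data(SAMPLE)
    # A search test can look for a plausible product name
    # instead of "foo" or "bar".
    assert rows[0]["name"] == "Trail Running Shoes"
    assert float(rows[0]["price"]) > 0
```

Appending new scenarios is then just adding rows to the file, with no changes to the tests that consume it.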
Migrating to automated testing can seem overwhelming, but it doesn’t need to be. Remember to start slow and to take baby steps. You can create an early plan that simply focuses on automating a few small segments of the manual methods you’ve been using, such as a few quick routines that can be triggered manually. Once you get comfortable with this, you can move forward to further stages of the automating process.
If you take a gradual approach, you will find that your transition to automated testing need not be painful. You’ll also be able to take some satisfaction in your automated victories as you move along toward your larger goal of a more efficient quality control system.