In chapter 5 of Test Automation Essentials,
we're going to talk about prioritizing our test automation.
We're gonna look at how to use risk,
as well as value to customers, to drive
our decisions on what to automate first,
so that we start with
the most critical areas of our application.
So pretty much nobody can afford to automate
all the things, all at once.
It's just not possible.
So let's look at some ways to prioritize
the regression tests to be automated.
Risk is a function of the probability that
some negative event will occur
together with the impact it will have if it does occur.
In some business domains,
even when the probability of a problem occurring is tiny,
the impact may be so great
the safety net of an automated regression test for it is needed.
For example, in aircraft systems, medical software,
or financial services, we can't afford
even the smallest negative event,
no matter how small the probability.
In other domains, we could live with some low impact
issues, even if they're fairly likely to occur.
So, a typical way teams do risk analysis
is to rate probability and impact each on a scale of,
say, one to five, where five is the highest.
So you can plot that on a graph, and
start thinking about, "what are the areas of highest risk
where you wanna automate the tests",
depending on your application and your domain.
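To make that scoring concrete, here's a tiny sketch in Python. Multiplying probability by impact and the bucket thresholds are just illustrative assumptions, not a standard formula:

```python
# Toy risk scoring: rate probability and impact each from 1 to 5,
# multiply them, and bucket the result. Thresholds are arbitrary.
def risk_score(probability, impact):
    if not (1 <= probability <= 5 and 1 <= impact <= 5):
        raise ValueError("probability and impact must each be 1-5")
    return probability * impact

def risk_bucket(score):
    if score >= 15:
        return "high"    # automate regression tests here first
    if score >= 6:
        return "medium"
    return "low"

# A security exploit might rate: low-ish probability (3),
# worst-case impact (5).
print(risk_bucket(risk_score(3, 5)))  # prints "high"
```

You'd score each potential failure the same way and automate tests for the high bucket first.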
Value to customers also plays a part in the impact.
A feature might score low
on both probability and impact,
so it doesn't seem high risk,
but if it's a customer delighter that
differentiates your product from others,
you may still want to make sure that it doesn't get
broken by a new change.
For example, maybe your product's user interface
has some clever animations that users just enjoy.
If they stop working in IE11,
no functionality is lost, but your IE11 users
are a little less cheerful that day,
so you may want to guard against that
with an automated regression test.
So, pause the video for a few minutes
and practice doing risk analysis with your teammates.
Think about a feature that your team has recently released.
What would happen if this feature stopped working
due to some other change in the code that you release later?
Think about the potential failures.
Where would you plot them in that graph
that we were looking at?
Get in the habit of thinking about risk
and thinking about value to customers.
That will help you focus your automation efforts
where you'll get the most benefit for your investment,
and since it's hard to get started with automation,
you wanna see those payoffs as soon as possible.
So, I'm gonna walk through an example.
Let's say we have a feature that lets a user
enter their billing address
to reserve a hotel room.
And we'll think about all the things that can happen
while they're doing that.
Well, exploits through form input are a common thing these days,
and security's always a concern.
Maybe we think that's not very likely to happen,
because we've protected against it pretty well,
but if an attack is successful,
that's a really bad security risk.
So, we're gonna put that in the high risk area and
make sure we have an automated test,
or more than one automated test,
to guard against cross-site scripting, for example.
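As a sketch of what such a guard might look like, here's a minimal unit-level regression test in Python. The render_billing_name function is a made-up stand-in for wherever our app echoes form input back to the page:

```python
import html
import unittest

def render_billing_name(name):
    # Hypothetical page fragment that echoes the user's billing
    # name back; html.escape neutralizes any embedded script tags.
    return "<p>Billing name: %s</p>" % html.escape(name)

class CrossSiteScriptingRegressionTest(unittest.TestCase):
    def test_script_tags_are_escaped(self):
        rendered = render_billing_name("<script>alert('xss')</script>")
        # Raw script tags must never reach the page...
        self.assertNotIn("<script>", rendered)
        # ...only their escaped, harmless form.
        self.assertIn("&lt;script&gt;", rendered)
```

You'd run this with your test runner, for example python -m unittest.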
And it's not likely our phone number validation will break,
because it's pretty isolated in our code
and we feel pretty good about it.
Still, somebody booking with an invalid phone number
is gonna cause problems, because
if we need to get hold of that person, we won't be able to.
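A phone number validation regression test could be sketched like this; is_valid_phone and its ten-digit rule are made up for illustration:

```python
import re
import unittest

def is_valid_phone(number):
    # Made-up validator: ten digits after stripping separators,
    # with an optional leading 1 country code.
    digits = re.sub(r"\D", "", number)
    return len(digits) == 10 or (len(digits) == 11 and digits.startswith("1"))

class PhoneValidationRegressionTest(unittest.TestCase):
    def test_valid_number_accepted(self):
        self.assertTrue(is_valid_phone("(555) 867-5309"))

    def test_too_short_number_rejected(self):
        self.assertFalse(is_valid_phone("555-1234"))
```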
Now, looking at postal code,
it happens that we're using a third-party tool to validate
postal codes in our pretend application.
And we have the rest of the billing address,
so we could figure the postal code out anyway,
so we're just not gonna worry
about automating a test for that at all.
So we may do the phone number validation test
and we're for sure gonna do the cross-site scripting test.
There are some other considerations besides risk.
A team I worked on decided to cover our legacy code base,
which was quite large and had a lot of critical functionality,
with regression tests only at the UI layer.
We would build all our new features going forward
in a new layered architecture designed
for testability, so that we could
build according to our test automation pyramid:
most of the tests at the unit level,
a good percentage of the tests at the
API or service level, and then not so many tests at
the UI level.
And, over a few years, we were able to build out
a normal looking test automation pyramid-
type model in the new code with our tests.
You may have some area of your product that
seems to just be buggy, and
you continually have regression failures in production,
in that part of the code.
They're not serious, they don't impact the customers a lot.
They're just super annoying and kind of a drain on your time.
So that might be a good place to start with test automation.
Especially if it's fairly easy to automate
the tests in that part of the code.
For example, if you can automate them at the unit level.
Some teams dip their toe into automation,
and into learning how to automate tests, with
something they call defect-driven development,
so each time a bug is reported,
the developers write a failing regression test,
hopefully at the unit level,
to reproduce that problem that's been reported.
And then they fix the code so that the test passes.
Then they commit both the automated regression test
and the fix for the problem.
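Here's a minimal sketch of that defect-driven loop in Python. The discount bug and the total_with_discount function are made up; the point is the failing test written first, then committed together with the fix:

```python
import unittest

def total_with_discount(price, discount):
    # The fix, committed together with the regression test below.
    # The (hypothetical) reported bug was the discount being
    # subtracted twice: price - 2 * discount.
    return price - discount

class DiscountRegressionTest(unittest.TestCase):
    def test_discount_applied_once(self):
        # Written first, and seen failing, to reproduce the report:
        # a $100 booking with a $10 discount was charging $80.
        self.assertEqual(total_with_discount(100, 10), 90)
```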
So that's really a good way to get started building your automation skills.
And it also can help developers learn to do test driven development,
which is a really great practice for writing robust and maintainable code.
So, choose good priorities for your team and your context.
And, again, get the whole team together, including
the business stakeholders,
to talk about where you want to start with automation,
or if you have some test automation already,
where do you want to go next?
Consider the risk and the value to customers,
and look for those quick wins and low hanging fruit.
You want to get value right away because
it's gonna be painful at first to do test automation
so it's really great if you immediately see some benefits.
In our next chapter,
we're gonna look at some basics for
building maintainable tests:
what to focus on to create maintainable tests, the best practices for making sure your tests are effective and easily understandable, and what to avoid so that your automated tests don't become detrimental to your testing.