Hi! In this chapter, we're going to look at ways
to create maintainable automated tests.
We're gonna explore some basic practices
for making sure our automated tests give us the value we need
and make us confident that the new changes
that we're committing to our code base
don't break existing behavior
that's already in production.
This requires a simple approach to automation.
But simple doesn't necessarily mean easy.
We're gonna touch on test data as part of this.
It's really important that each automated test
has one clear purpose.
You may have multiple tests per business rule,
but you should not test multiple business rules
in the same test.
And you should give your tests descriptive names
so that when a test fails,
you know just from the name of the test
that failed, exactly what part of the code is broken.
Really saves you a lot of time in diagnosing failures.
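Here's a rough sketch of what I mean, in Python with pytest. The discount rule and the function name are made up just for illustration, but notice that each test checks exactly one rule, and the test name tells you which rule it is, so a failure points straight at the broken behavior.

```python
# A minimal sketch in pytest, using a hypothetical discount rule.
# Each test has one clear purpose, and its name says what that purpose is.

def calculate_discount(order_total, is_member):
    """Hypothetical production rule: members get 10% off orders over $100."""
    if is_member and order_total > 100:
        return order_total * 0.10
    return 0


def test_member_order_over_100_gets_10_percent_discount():
    assert calculate_discount(order_total=150, is_member=True) == 15


def test_non_member_order_over_100_gets_no_discount():
    assert calculate_discount(order_total=150, is_member=False) == 0
```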
As a tester, I have found it's really hard to curb
my tendency to sniff out the
potentially smelly bits of code that
the developers have built and
go and see if some edge case makes it behave incorrectly.
My teams have sometimes needed to
say, "hey Lisa, could you first make sure
that it does the basic thing we're trying to make it do?
Don't get off into the weeds in the edge cases."
We wanna know, are the business rules implemented correctly
for the basic happy path through that capability?
My own teams have had the best results doing
automation incrementally and iteratively
right along with writing the production code.
So for example, if we're working on a story to build
some piece of an API feature,
I might specify a basic happy path test
for the most straightforward path to that
piece of the feature, that capability.
And, when the developers have written enough code
so that that test can be automated and it passes,
then that becomes part of our automated regression suite
and I start looking for other paths through that capability.
So then I start in with the negative tests,
the boundary conditions, the edge cases,
those all follow one by one.
For example, what's the minimum and maximum input value?
What happens if I pass in a null value?
As each test passes, I can move on to the next one and
so can the developers with their production code.
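Just as a sketch of that rhythm, here's roughly how it might look in Python with the requests library against a hypothetical orders endpoint; the URL, the fields, and the expected status codes are assumptions for illustration, not a real API.

```python
# Sketch of the incremental approach against a hypothetical /orders endpoint.
# The happy-path test comes first; once it passes and joins the regression
# suite, the boundary and negative tests follow one by one.
import requests

BASE_URL = "https://test.example.com/api"  # hypothetical test environment


def test_create_order_with_valid_quantity_succeeds():
    # Happy path: the most straightforward use of the capability.
    response = requests.post(f"{BASE_URL}/orders", json={"item": "widget", "quantity": 1})
    assert response.status_code == 201


def test_create_order_with_quantity_above_maximum_is_rejected():
    # Boundary condition: added only after the happy path passes.
    response = requests.post(f"{BASE_URL}/orders", json={"item": "widget", "quantity": 10001})
    assert response.status_code == 400


def test_create_order_with_null_quantity_is_rejected():
    # Negative test: what happens if we pass in a null value?
    response = requests.post(f"{BASE_URL}/orders", json={"item": "widget", "quantity": None})
    assert response.status_code == 400
```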
And again, this requires our whole-team approach
for test automation.
So, I'd like you to think about this:
you have this user story.
The billing address form requires a valid postal code
that matches the city for all North American addresses.
So pause the video and think about
what your first automated UI test for this would be.
It might be tempting to start by leaving
the postal code blank, or filling the whole postal code
field up with numbers to see what happens,
because that might expose a problem.
But it's going to help the team more at first if we
make sure that a valid postal code which
actually matches the city that was entered is accepted.
That's the thing we want to make sure happens.
And once that's true, then we can start testing the
invalid scenarios and the really crazy ones,
if we want to.
Now, note that for the sake of an example, I'm using a
user interface test; this is a type of test that
could be done at lower levels.
But this is just an example.
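Here's a rough sketch of what that first happy-path test could look like with Selenium in Python; the page URL, the field IDs, and the confirmation message are all assumptions just for illustration.

```python
# Sketch of the first happy-path UI test for the billing address form,
# against a hypothetical page. Field IDs and messages are assumptions.
from selenium import webdriver
from selenium.webdriver.common.by import By


def test_valid_postal_code_matching_city_is_accepted():
    driver = webdriver.Chrome()
    try:
        driver.get("https://test.example.com/billing")
        driver.find_element(By.ID, "city").send_keys("Toronto")
        driver.find_element(By.ID, "postal-code").send_keys("M5V 2T6")
        driver.find_element(By.ID, "submit").click()
        # Assert the form accepts a postal code that matches the city.
        message = driver.find_element(By.ID, "confirmation").text
        assert "billing address saved" in message.lower()
    finally:
        driver.quit()
```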
Practitioners who are new to automation,
whether they're testers or programmers,
tend to forget to include assertions,
that's something I've noticed.
And if your automated test simply clicks around
and navigates around, say, a user interface,
it's not really testing anything.
The only reason it will fail is if something
catastrophic happens like the page doesn't load,
or the field it's trying to click on disappears.
So we need assertions to assert for what really matters.
Am I on the right page in the UI, for example?
Are the values I expect to see actually there?
Is the application in the correct state?
So in this example,
we want to make sure that our user interface
has some kind of welcome message.
And this just happens to be something
our marketing department might change every day.
But it should always say the word "welcome".
So if we say it equals an exact
text string, and tomorrow they change it to,
"Welcome! See what's new",
the test will fail, but we don't care about that failure,
because it's only important that it has a
message that says "welcome" in it.
So, use the most specific assertion that you can use
without being overly specific
that just checks the behavior you want to see.
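To make that concrete, here's a small sketch in Python with Selenium; it assumes a driver is already set up and that the banner has a hypothetical element ID, but it shows the difference between an exact-match assertion and one that just checks for the word "welcome".

```python
# Sketch of the welcome-message assertion, assuming a Selenium `driver`
# fixture and a hypothetical element ID on the home page.
from selenium.webdriver.common.by import By


def test_home_page_shows_welcome_message(driver):
    driver.get("https://test.example.com/")
    banner = driver.find_element(By.ID, "welcome-banner").text

    # Too specific: breaks tomorrow when marketing changes the wording.
    # assert banner == "Welcome! See what's new"

    # Specific enough: checks the behavior we actually care about.
    assert "welcome" in banner.lower()
```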
Keep it simple. An old Extreme Programming principle
is do the simplest thing that could possibly work.
Back in the day, I started using really sophisticated
test automation scripting, and I put in a lot of conditionals,
"if this do X,
else do Y."
If a test with logic in it fails,
there's a good chance that the test itself is buggy.
And if you must put logic in your test,
then you're gonna have to test your test.
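Here's a little sketch of that idea in pytest, with a made-up shipping rule; instead of one test that branches with if/else, we write two straight-line tests that have no logic of their own to get wrong.

```python
# Sketch of the "no logic in tests" idea, using a hypothetical shipping rule.

def shipping_cost(order_total):
    """Hypothetical production rule: free shipping at $50 and up, otherwise $5."""
    return 0 if order_total >= 50 else 5


# Avoid this: the branch means the test has its own logic, which can be buggy.
# def test_shipping(order_total):
#     if order_total >= 50:
#         assert shipping_cost(order_total) == 0
#     else:
#         assert shipping_cost(order_total) == 5


def test_order_of_50_or_more_ships_free():
    assert shipping_cost(50) == 0


def test_order_under_50_pays_flat_shipping():
    assert shipping_cost(49) == 5
```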
If your application has rather complicated data models,
it may be laborious to set up the test data, and
there's a temptation to follow one test with another.
It's like saying, "well this test set it up to this point,
and the next test I want to do takes it from that point
and goes forward, so let's just chain these tests together."
Resist that temptation.
There are better ways to get your application
into the state you need to start a test scenario.
I once fell down a slippery slope, and I had 25 tests chained together.
And so, often, something would happen in the second test
that didn't cause that test to fail,
but it might change the data or change the state of the application,
and then a few tests later, maybe the 15th test,
that test would fail because
nothing was as expected anymore.
Now I had to go through a bunch of tests
to track down the failure and that took forever.
So avoid that temptation,
don't chain your tests together. Keep them unique,
keep each test with one clear purpose,
and use better ways to set up test data
and the test state for your tests.
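One way to do that, sketched here in Python with pytest and the requests library, is a fixture that calls a hypothetical data-seeding endpoint to put the application into the state the scenario needs, and then cleans up afterward; the endpoints and fields are assumptions for illustration, not a real API.

```python
# Sketch of independent test setup in place of chaining, using a pytest
# fixture and a hypothetical seeding endpoint. Each test creates the state
# it needs, so no test depends on the one that ran before it.
import pytest
import requests

BASE_URL = "https://test.example.com/api"  # hypothetical test environment


@pytest.fixture
def order_awaiting_payment():
    # Put the application directly into the state this scenario starts from.
    response = requests.post(f"{BASE_URL}/test-data/orders",
                             json={"status": "awaiting_payment"})
    response.raise_for_status()
    order = response.json()
    yield order
    # Clean up so later tests aren't affected by this test's data.
    requests.delete(f"{BASE_URL}/test-data/orders/{order['id']}")


def test_paying_an_order_marks_it_paid(order_awaiting_payment):
    response = requests.post(f"{BASE_URL}/orders/{order_awaiting_payment['id']}/pay")
    assert response.status_code == 200
    assert response.json()["status"] == "paid"
```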
Let's look at some test data basics.
Your team can write utilities, stored procedures,
API endpoints, scripts, or any
number of other means to populate data into a test environment
and put the application into the state it needs to be
for a given test scenario.
These are things that your team can create,
be sure that they're absolutely working and
never worry about again,
and that way each test can be independent.
I often see test failures because
tests are running in parallel and they're using
the same test user login credentials,
and one test maybe deletes some data
that the other test is trying to use.
We want to avoid that.
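One simple way to avoid it, sketched here in Python, is to give every test its own freshly created user instead of sharing login credentials; the registration endpoint here is a made-up example.

```python
# Sketch of one way to keep parallel tests from colliding: each test gets its
# own user, created through a hypothetical test-data endpoint.
import uuid
import requests

BASE_URL = "https://test.example.com/api"  # hypothetical test environment


def create_unique_test_user():
    # A unique suffix means two tests running at once never share an account.
    suffix = uuid.uuid4().hex[:8]
    user = {"username": f"test-user-{suffix}", "password": "a-test-password"}
    requests.post(f"{BASE_URL}/test-data/users", json=user).raise_for_status()
    return user
```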
You need to put some thought into making your tests reliable
so that you have confidence they will only fail
when a change to your product, some new code
that someone has committed, causes a regression failure.
That's what our regression tests are for.
So, a maintainable test is valuable to our team.
It doesn't fail because some other test changed the test data.
It doesn't fail if some unimportant wording on the page changed
that we don't care about.
It verifies a realistic scenario that could happen in production
with production-like data.
We're gonna wrap up this introductory course in the next chapter,
where I'll review some basic test automation theory.