Hi, welcome to chapter 8.
Let's talk about how your team can gain confidence from your automated regression tests.
In this chapter, we'll look at what makes tests valuable,
checking the right things, checking different quality attributes,
and using assertions to get valuable feedback.
When we write automated regression tests,
we're trying to teach a machine how to execute repetitive checks.
We have to be specific about each item that we want our test to check.
Now when you're doing manual testing, you have a lot of tacit knowledge about
your domain and your application that maybe nobody ever spelled out for you.
But when we're automating a regression test, we have to spell everything out for the machine.
This applies at all levels of the test automation pyramid that we talked about earlier.
Here, we're going to look at some examples of testing through the UI.
So, pause the video for a minute and think about test cases
that have provided value for you and your team,
whether they're manual or automated.
Valuable test cases assert the desired behavior,
the expected state, the presence of elements,
without being so detailed and prescriptive
that unimportant changes, such as exact wording,
could cause a failure and waste time analyzing that failure.
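As a sketch of what that looks like in code, a test can assert the presence of the elements and the state the customer relies on, without pinning down unimportant details. The dict-based page model and element names below are hypothetical stand-ins; in a real suite these lookups would go through your UI automation tool:

```python
# Hedged sketch: assert presence of key elements and expected state,
# using a hypothetical dict-based page model in place of a real driver.

checkout_page = {
    "elements": {"pay-button", "cart-summary", "promo-field"},
    "state": {"cart_items": 2, "total": "19.98"},
}

def assert_page_ready(page):
    # Presence of the elements the customer relies on.
    for element_id in ("pay-button", "cart-summary"):
        assert element_id in page["elements"], f"missing element: {element_id}"
    # Expected state, without asserting unimportant details like exact markup.
    assert page["state"]["cart_items"] > 0, "cart should not be empty"

assert_page_ready(checkout_page)  # passes silently when the page is as expected
```

Note that the test says nothing about the promo field or the exact total format, so harmless changes to those won't cause failures to analyze.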
Now let's look at some of the things our tests can check.
We want to make conscious decisions about the different quality attributes of our application and
how much we want to do regression testing on them.
Is the copy text on the page appropriate?
Do the colors match the product's brand strategy?
Can the user clearly understand everything on the page?
Some of these quality attributes may not have explicit requirements,
but you still don't want a new change to the application
to cause a regression that leaves them looking wrong.
Now, in other cases, exact wording might not be that important,
as long as certain key words or terms appear in the text.
There's a tradeoff there.
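Here's a minimal sketch of that tradeoff: an exact-wording assertion versus one that only checks for key terms. The copy text and keywords are hypothetical examples, not from any specific application:

```python
# Hedged sketch: brittle exact-wording check vs. resilient keyword check
# over the same (hypothetical) page copy.

def brittle_check(page_text):
    # Fails if the copy changes at all, even an unimportant rewording.
    return page_text == "Your order #1234 has been placed successfully."

def resilient_check(page_text):
    # Passes as long as the key facts appear, regardless of exact wording.
    text = page_text.lower()
    return all(keyword in text for keyword in ("order", "#1234", "placed"))

# A harmless copy tweak breaks the brittle check but not the resilient one.
new_copy = "Success! Your order #1234 was placed."
print(brittle_check(new_copy))    # → False
print(resilient_check(new_copy))  # → True
```

The resilient version still catches real regressions, such as the order number disappearing, while surviving routine copy edits.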
Our designers put a lot of effort into user research and user experience.
Let's make sure those standards are upheld.
Again, this is dependent on the product, the domain,
how key the user experience is,
how much detail is needed.
But these are things to think about as you're designing your automated tests.
When we think of regression testing, we usually think about functionality first.
But that still covers a lot of different quality attributes.
Can customers find their way around in the user interface?
Do they see all the tooltips they're used to?
In a user interface, these small things can easily have regression failures
when you change other things.
And that could cause big usability problems, so we want to guard against it.
Customers expect a lot from user interfaces. They expect a lot of help.
Beyond the arrangement of a single page, there's the correctness of the workflow sequence as we navigate through multiple pages.
Are there alternative paths that we may need to keep supporting?
Customers may go through different paths to achieve what they want.
Sometimes there are undocumented features, or customers use paths we didn't expect them to use. Do we want to safeguard all of those and make sure they don't get broken?
At a minimum, we want to make sure the important existing ones continue to work.
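One way to safeguard multiple paths is to run the same end-state assertion over each path you've decided to support. This sketch models the workflow as a simple transition graph; the pages, transitions, and paths are hypothetical, and a real test would drive the UI through each sequence:

```python
# Hedged sketch: verify that several supported paths all reach the same goal.
# The workflow graph and page names below are hypothetical.

WORKFLOW = {
    "home": {"search", "categories"},
    "search": {"product"},
    "categories": {"product"},
    "product": {"checkout"},
}

def path_reaches(path, goal):
    # Walk the path, confirming each step is a legal transition,
    # and that the path ends at the goal page.
    for current, nxt in zip(path, path[1:]):
        if nxt not in WORKFLOW.get(current, set()):
            return False
    return path[-1] == goal

# The documented path and an alternative path customers actually use.
supported_paths = [
    ["home", "search", "product", "checkout"],
    ["home", "categories", "product", "checkout"],
]
print(all(path_reaches(p, "checkout") for p in supported_paths))  # → True
```

Keeping the paths in one list makes it a conscious decision which ones you commit to protecting, and a one-line change to add another.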
Automation alone won't verify that our application meets all the goals of our team, our customers, and our company.
But we want to stay confident that our product continues
to serve customer needs as intended, as they expect,
while we're adding new features for our customers.
I encourage you to take a look at the Google RAIL performance standards.
Automation helps hugely with checking performance
and detecting slowdowns, including in the user interface.
The RAIL standards include goals: key performance metrics tied to user experience.
Human perception is fairly constant,
so these goals are unlikely to change anytime soon.
There are also guidelines, recommendations that help you achieve the goals.
Those might be specific to current hardware and network conditions,
so they may change over time; make sure you're keeping up with the current standards.
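A performance check along these lines can compare measured timings against a budget per metric. The thresholds below reflect commonly cited RAIL goals (roughly 100 ms to respond to input, ~16 ms per animation frame, 50 ms idle-work chunks, 1000 ms to deliver content), but verify them against the current RAIL documentation before relying on them:

```python
# Hedged sketch: flag measurements that exceed their performance budget.
# Thresholds (in ms) are illustrative RAIL-style goals; check current docs.
RAIL_BUDGETS_MS = {
    "response": 100,   # respond to user input
    "animation": 16,   # produce each animation frame
    "idle": 50,        # keep background work in small chunks
    "load": 1000,      # deliver meaningful content
}

def over_budget(measurements_ms):
    # Return only the metrics whose measured values exceed their budget.
    return {
        metric: value
        for metric, value in measurements_ms.items()
        if value > RAIL_BUDGETS_MS.get(metric, float("inf"))
    }

# Hypothetical measurements from one performance test run.
print(over_budget({"response": 80, "animation": 22, "load": 900}))
# → {'animation': 22}
```

Failing the test only when `over_budget` returns something non-empty gives you an automated early warning of slowdowns without asserting exact timings.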
Asserting that the application is up to snuff
in all these different quality attributes, including design,
visuals, performance, behavior,
and correct capabilities for our customers,
can give you confidence that each new change to the product
gives some value to the customer without breaking anything they're already using.
Automated tests provide this feedback quickly, which is what we need to succeed with continuous delivery.
In the next lesson, we're going to look at how we can balance risk and maintainability
with assertions and conditional logic.
You'll learn how to get the most out of your assertions, how to improve your tests' feedback and maintainability, and why conditional logic can be both helpful and more work, along with how to use it effectively.