Accessibility and performance are rapidly growing areas of focus for quality engineering teams. Why? If your application isn't accessible, or won't load at all, it doesn't matter if it's functionally correct. To create better experiences for your users, it's critical to integrate non-functional testing into your quality engineering strategy. Juliet and Eva will share tips for integrating non-functional testing into your E2E tests, and share how quality leaders can promote the importance of these activities to their organizations.
Welcome. Hello, everybody, we are very happy to have you at mabl Experience and I'm happy to announce the session, “Achieving Great CX with Non-functional Testing”. Before we get started, just a few housekeeping items to go through. If you have any questions throughout the presentation, please leave those in the Q&A panel on the right side of your screen. For comments and discussions, you can use the chat feature. You'll find both of these on the right side of the session space when the video is minimized. We will leave time at the end of the presentation for questions. And with that, I will hand it off to Juliet. Take it away.
Hello, thank you all so much for joining today, we are really excited to talk to you about achieving great customer experiences with non-functional testing. So before we get started, a little bit about us. My name is Juliet MacPhail, and I've been with mabl for about three and a half years now. I'm currently the product manager for our browser and API testing team, and that really focuses on the core of the mabl product and how you create and run your tests. I've most recently been working on a few efforts around accessibility testing and enhancements to our API testing offering. And I'm currently digging into the growth of performance testing, which is very exciting. I'm joined by my wonderful co-worker, Eva.
All right, so there's quite a bit that we're looking to cover today. We're going to start by setting the stage on how software quality assurance ties into the customer experience and why that matters. We'll dig into accessibility testing, the importance of performance efficiency, as well as where non-functional testing can fall in your quality strategy and how you can integrate it into your own end-to-end testing. We'll leave time at the end as well for any questions you might have for us.
So I first want to provide some context on the relationship between quality and the customer experience, and the role that non-functional testing can play in understanding and improving that experience. There's a lot happening in the software industry these days, and a term that you'll hear pretty often is transformation, whether that's digital transformation, DevOps transformation, or technology transformation. And what's the one common thread across all of these different initiatives? The need for quality. And if quality is core to these initiatives, how do we ensure that we're actually delivering on it? Because the truth is, leading in this new reality requires us to move very quickly and deliver superior customer experiences. We believe the solution is elevating quality by transitioning from quality assurance to quality engineering. What that means in practice is validating your functional and non-functional attributes, ensuring quality throughout the entire customer experience, and using data to drive improvement.
We believe that quality engineering is the right mindset, methodology, and framework. But we also acknowledge that quality engineering is a journey, and it takes time to mature your quality efforts. So what we'll really be focusing on today is, as you get further into this journey, the role that non-functional testing will play and how you can incorporate it into your own quality assurance strategy.
So what exactly is this relationship we talked about between quality and the customer experience? Product quality can be directly correlated to customer loyalty and customer satisfaction; in fact, customer satisfaction can fully mediate the relationship between product quality and customer loyalty. Customers are becoming increasingly loyal to businesses that consistently provide exceptional value without the friction or stress that can drive them away. So as organizations continue to go through these transformations, we really want to ensure quality across the entire customer experience. And why do we really care about experience? Well, I feel like the numbers here really say it all.
32% of customers say they will walk away from a brand that they love. 65% of consumers say that a positive experience is more influential than advertising. And 52% of US consumers say they might switch brands for better product quality. In mabl's 2022 State of Testing in DevOps Report, we also saw a similar trend across the perceived value of QA professionals and customer satisfaction. Organizations that acknowledge the strategic value of QA also see higher customer satisfaction. And 41% of respondents whose QA professionals are valued as highly as software engineers in their org reported amazing or good customer satisfaction.
So when QA is prioritized as a strategic initiative, the business and customers benefit. We are strong believers in the future of quality engineering, a future where quality is not just about functionality, but is measured by customers across the entire customer experience. We're seeing a trend of teams moving away from traditional QA, which really focuses on functional quality, toward embracing quality engineering and a culture of quality, focusing not only on the functional aspects but also the non-functional. And as a company that really embraces QE, we believe that it's our responsibility to enable teams to not only run all of these types of tests, but also reuse those flows to incorporate non-functional testing into your quality strategy. We'll talk more about what that looks like in practice later on. So I'm going to hand it over to Eva to talk about why accessibility testing is so critical.
Thank you very much. Well, maybe you have seen a poll pop up that asks how many people in the world you think have a disability. Do you think it's 10% of the population? Do you think it is 15%, 20%, or 25%? That number is honestly one of the biggest tools that we have when it comes to convincing people to work on accessibility. What is accessibility? Well, accessibility is the practice of making your web application usable by as many people as possible. By that we mean that we want to ensure that people who have a disability or medical condition will be able to use our websites.
That number is 15%: 15% of the global population lives with some sort of disability, which is around 1 billion people. And if you ask me, that is reason enough to care about accessibility. But sometimes I have found myself in different rooms, talking to different people, who don't consider that number high enough to care.
What I want to introduce to you today is the idea that accessibility testing and accessibility features actually impact way more than just that 15%. When we talk about the 15%, we talk about permanent disabilities. But let's take an example: a person who has a hearing impairment, someone who is deaf or hard of hearing. This person will use captions on YouTube videos or Netflix in order to understand a movie or a video. Now, these captions are an accessibility feature that was created just for that.
But what happens if someone has an ear infection? Now an ear infection is something that might last for a couple of days, or maybe a week or two. And during that time, this person won't be able to hear as well as they used to. So they may find themselves using technology in a different way. They might find themselves turning on captions on their YouTube videos and Netflix movies.
So that's something temporary that might last a couple of days. But what about something that lasts a couple of minutes? Let's take the example of someone who is working in a very loud place, like a bartender. Or someone who gets on public transport and forgets to bring their headphones, and they want to watch a video on YouTube. They will turn off the sound and turn on captions, and they'll be able to have a wonderful experience thanks to captions. And even someone like me, whose native language is not English: I prefer to go into Netflix and turn on captions, just to be sure that I'm getting the right thing and can enjoy the content more.
Whenever we talk about building for disabilities, bear in mind that we are also building for all users, and every user can benefit from this. We are creating better experiences for everyone. Now, if that didn't convince you, here's a quote by the one and only Sir Tim Berners-Lee, the father and creator of the web: "The power of the Web is in its universality. Access by everyone regardless of disability is an essential aspect." And if you ask me, at the end of the day it doesn't really matter whether you care about accessibility because it is the right thing to do, because you understand that it is good business, or because you are afraid of legal enforcement.
The why doesn't matter as long as you care. I want you to care about accessibility testing and to include it in your workflows. And when we talk about creating a web for everyone, we shouldn't forget that not everybody enjoys a good, fast internet connection, and that not everyone has a high-end device. So we should also talk about performance, and performance testing.
In the context of digital businesses, what do consumers really care about? Obviously, they care about functional correctness: they want our sites to work as expected and complete tasks reliably. But this is only one piece of the puzzle. Speed, availability, performance, and stability are all key attributes of the user experience that impact the perception of your site.
In fact, performance efficiency is one of the most critical operational characteristics of an application: it both reflects software quality and helps ensure positive user experiences. Performance testing provides distinct value across a wide range of applications. As Eva mentioned earlier, first and foremost it helps provide a more equitable experience for all users. The conditions of a consumer can vary pretty wildly: they might be using assistive technology or accessibility devices that could slow down their experience, and they may not have access to more recent technology. So it's really important that all of our users are supported across a wide range of needs and use cases. In a more practical sense, failing to identify performance regressions can have substantial implications, whether that's financial losses, increased maintenance costs, or user dissatisfaction.
For example, back in 2012, Amazon famously reported that a one-second delay in page load time could decrease overall sales by as much as $1.6 billion annually. And lastly, when it comes to how your team works and their ability to work effectively, the less time that passes between when a performance regression is introduced and when it's found, the easier it is to troubleshoot and resolve. And as we all know, the closer those bugs get to production, the more costly they become.
So with this context of the user experience and performance efficiency, organizations are starting to see the importance of performance testing in driving business outcomes and customer satisfaction. Analysts like Gartner are acknowledging that decreasing performance-related issues can have a direct impact not only on customer satisfaction, but on your infrastructure costs. And as a result of these factors, even more organizations are looking to use performance tests, and they're interested in how artificial intelligence and machine learning can improve that experience. So how can you actually start accounting for non-functional aspects of quality? Eva and I are going to walk you through some of the first steps you can take to get started.
So hopefully we have convinced you to care about these things. Let's think about how we can begin implementing them in our everyday workflows. You might have seen different comments online about how hard it is to test for accessibility. And that is real. Convincing people to care about accessibility is the hardest thing, and the second hardest thing is to actually get it done.
The reason it is so hard is mainly that the development tools are disconnected from the testing tools that we can use. Sometimes we end up finding issues very late, once they are already in production. And that creates a never-ending loop: we find an issue, we create a ticket, and we add it to the backlog; we find another issue, we create a ticket, and we add it to the backlog; and so on and so forth. Suddenly we find ourselves with an endless backlog of accessibility issues that might never get fixed. The other reason this is so hard is that it's hard for teams to collaborate, because one team might have a certain tool and another team might have a different one. So we might find ourselves missing a single source of truth: we don't know the current status of our web application.
I invite you to check out mabl's answer to accessibility testing: accessibility checks integrated into our unified platform. When you create a UI test, you can add an accessibility check to it without coding anything. That lets you define and configure which checks you want: do you want to check against WCAG 2.0 or WCAG 2.1? Do you want to check the whole page, or only a few elements? Perhaps you've been working on a modal window and want to check just that. Once you've configured it, run the test, and you'll find the reporting.
Now, the reporting and insights tools that we have are extremely valuable, because everybody on the team can see them, and you will be able to find clear trends over time. You might see the number of issues slowly going down, which means you are making your web application more accessible. Or you might suddenly find a spike, and if you do, you'll realize that you have a new issue that wasn't there before. So basically, when you run these tests, what you want to do is avoid creating new issues.
And I know this sounds obvious. But the reality is that most websites out there have accessibility issues; the numbers show that only 3.2% of websites have no accessibility issues at all. That is a very low number. So we can have accessibility issues, and we can work on them, and that's what we do. But we need to try to avoid creating new ones. And in order to know if we are creating new ones, we need a baseline. When you run those tests and have this chart, you will know how many accessibility issues you have right now, at this moment, and that becomes your baseline. Then, whenever you are working on new features, you will be able to test them against it. Of course, when testing for accessibility, we are always talking about a combination of manual and automated testing. Why manual testing? Because there are some things that we still cannot automate. Take alternative text for images, for example: we can know whether you have included alternative text for an image or not, but we cannot really know if the content is correct, whether there is a grammar error in it, or whether the content is biased. So you do need to do a little bit of manual work in order to be sure that your application is accessible.
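To make the baseline idea concrete, here is a minimal Python sketch (not mabl's implementation) of comparing a fresh scan against a stored baseline to flag newly introduced issues. The issue identifiers are made up for illustration, loosely following axe-core rule names:

```python
# Compare a current accessibility scan against a saved baseline to detect
# newly introduced issues. Issue IDs here are illustrative ("rule:element").

def new_issues(baseline: set[str], current: set[str]) -> set[str]:
    """Return issue IDs present in the current scan but not in the baseline."""
    return current - baseline

# Known, pre-existing issues we are already tracking in the backlog.
baseline = {"color-contrast:footer", "image-alt:hero-banner"}

# Results of the latest automated scan after a feature change.
current = {"color-contrast:footer", "image-alt:hero-banner", "label:search-input"}

introduced = new_issues(baseline, current)
# A non-empty result means the change under test added new accessibility debt.
```

In practice the baseline would be refreshed whenever known issues are fixed, so the trend chart keeps moving in the right direction.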
And the other part you can leave to us at mabl, and we can run automated accessibility testing. And last but not least: you run all these tests, and suddenly you see 200 accessibility issues. What do you do? Do you stop what you're doing and fix them all? No, you need to prioritize, and we always recommend prioritizing on impact and on difficulty. For impact, use common sense. If you go into your application and it doesn't work correctly with a keyboard (you press Tab on your website and nothing happens), that is high impact. Why? Because keyboard navigation is something that people with motor disabilities or visual impairments rely on: they cannot use a mouse, so they end up using a keyboard. If your keyboard navigation doesn't work, that is a high-impact bug.
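The impact-and-difficulty triage just described could be sketched like this in Python; the issues, scores, and 1-to-3 scale are purely illustrative, not mabl output:

```python
# Triage accessibility issues: highest impact first, and within equal
# impact, easier fixes first. impact/difficulty: 1 = low, 3 = high.

def triage(issues: list[dict]) -> list[dict]:
    """Order issues for fixing: big impact before small, easy before hard."""
    return sorted(issues, key=lambda i: (-i["impact"], i["difficulty"]))

issues = [
    {"name": "footer copyright contrast", "impact": 1, "difficulty": 1},
    {"name": "keyboard navigation broken", "impact": 3, "difficulty": 2},
    {"name": "missing alt text on logo", "impact": 2, "difficulty": 1},
]

for issue in triage(issues):
    print(issue["name"])
```

With this ordering, the broken keyboard navigation comes out on top, matching the common-sense call Eva makes above.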
But what about the copyright text at the bottom of your screen? If that footer doesn't have good contrast, it can wait a little bit. So basically, try to prioritize things with big impact that are easy to solve first, then mix in things with small impact that are easy to solve, good candidates for a Friday afternoon. Things with big impact that are hard to solve take a bit more planning: bring that ticket to sprint planning and talk to your manager about it. And for the small-impact, hard-to-solve issues, it's fine to just put them in the backlog and take care of them whenever you can. So let's see what happens when we talk about performance testing.
So performance testing presents a unique set of challenges as well. First and foremost, it's not a particularly good candidate for manual testing: simulating large amounts of load isn't possible without substantial person power, so automation here is really key. But that in itself presents another set of challenges around tooling, skill sets, and test maintenance. These issues can also be challenging to troubleshoot and fix.
As I mentioned earlier, the more time that passes between the introduction and identification of a performance regression, the harder it becomes to identify the root cause of that issue. And many teams don't actually prepare performance testing requirements, which means there's no basis or standard for appropriate acceptance criteria. If you don't understand what's viewed as acceptable performance, it's hard to know whether your performance is improving or degrading. And lastly, similar to accessibility testing, performance testing is unfortunately often viewed as a burden. Many organizations do not address performance until something goes terribly wrong, which can result in reputational damage and financial loss.
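One lightweight way to establish the acceptance criteria mentioned above is an explicit performance budget that is checked on every run. This is a generic sketch under assumed page names and thresholds, not a mabl feature:

```python
# Encode explicit performance acceptance criteria ("budgets") so regressions
# are caught automatically. Page names and thresholds are illustrative.

BUDGETS_MS = {
    "login": 1500,      # the login page should finish loading within 1.5 s
    "dashboard": 3000,  # the dashboard gets a looser 3 s budget
}

def check_budgets(measurements_ms: dict[str, float]) -> list[str]:
    """Return the pages whose measured load time exceeds their budget."""
    return [page for page, ms in measurements_ms.items()
            if ms > BUDGETS_MS.get(page, float("inf"))]

# Example run: login is over budget, dashboard is within it.
violations = check_budgets({"login": 2100.0, "dashboard": 2400.0})
```

A check like this could gate a CI pipeline, turning "acceptable performance" from an opinion into a testable criterion.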
So mabl is on this journey as well. If you've been with us for a while, you've probably seen some initial flavors of performance reporting. Mabl automatically collects performance data across both browser and API tests at the micro and macro level. That includes quickly viewing any changes in the duration of a plan over time at the test level, or digging really deep into the performance of a specific test step. And you can easily tailor these views to better understand performance based on the application, the environment, or a given date range, and dive into the data to assist with root cause analysis.
We've also released enhancements to help you identify performance regressions at the release level across your browser and API tests. All this happens automatically, without any effort on your part, and it helps you better understand, at a glance, key performance changes in your application. From this view, you can easily identify the tests with the most significant slowdowns and spot any problematic areas of the application when evaluating the health of a release. We also surface insights on a number of factors, including page load time and test runtime, to quickly inform your team of any anomalies. And each test, browser and API alike, includes detailed performance data at the test level, with daily averages and specific test run data points.
Looking into the future, mabl is starting to expand into load testing in the coming quarters, and we're really excited to provide a unified platform that allows you to leverage your existing tests to evaluate performance. Here at mabl, we also care about the performance of our own application and of our tests. We talked a lot about application performance, but test performance is exceptionally important as well. Especially as we start shifting our testing further left, shortening feedback cycles is essential, and the longer your tests take to run, the longer that feedback cycle becomes. So we've been able to use mabl to help ourselves in these cases as well.
One situation that came up for us earlier this year was the observation that the average runtime for one of our flows had changed considerably. If you've been here for a while, you may have heard about our release of the unified monitor for Chrome. As part of that release, we added a modal in the app announcing it. But in order to account for that in our own testing, we had to update our login flow to dismiss the modal that was present on the page.
A couple of months after that release, we removed the modal from the app. But we noticed a relatively big jump in the execution time of our login flow. It turns out that when we removed the modal, we didn't update the test, which continued to spend time searching for that element even though it was no longer part of our application. We were able to update the test to remove that step and speed up the execution of this flow, which at the time was used across 150 of our tests.
Another example, thinking more about performance testing and app load time, is one we observed recently after we released some of our more enhanced performance reporting. With this performance reporting, we were able to identify a regression on our login page. From the test level, it was pretty clear that something had changed in early September that was causing our app to run more slowly. By digging into the test steps themselves, we were able to track down the specific step with the most significant anomaly. Our team can use this information to better understand changes that occurred during that time period on that specific page, to help us troubleshoot and resolve the regression.
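Conceptually, the kind of anomaly detection described here can be as simple as comparing the latest timing against the historical mean and spread. The following Python sketch is illustrative only, not mabl's actual algorithm; the threshold and timings are made up:

```python
# Flag a step timing as anomalous if it sits far above the historical
# average, using a simple z-score. Threshold and data are illustrative.
import statistics

def is_anomaly(history_ms: list[float], latest_ms: float,
               threshold: float = 3.0) -> bool:
    """Return True if latest_ms is more than `threshold` standard
    deviations above the mean of the historical timings."""
    mean = statistics.mean(history_ms)
    stdev = statistics.stdev(history_ms)
    if stdev == 0:
        return latest_ms > mean
    return (latest_ms - mean) / stdev > threshold

# Stable daily averages for a login step, then a sudden jump.
history = [820.0, 790.0, 805.0, 815.0, 798.0]
print(is_anomaly(history, 1400.0))
```

Real systems would account for trends and day-to-day noise, but even this crude check would surface a September-style jump immediately rather than weeks later.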
As I mentioned earlier, quality is a journey, and it takes time to mature your quality engineering practices. Your team might be anywhere on the spectrum, whether that's working to increase functional test coverage or shifting your testing further left. Having a foundational level of test coverage and being able to execute those tests at a consistent cadence is critical. Once you have that foundational layer, the benefit of a unified platform like mabl is that you can use what you already have. There's no need to create entirely new tests to validate your non-functional quality; instead, you can build upon what you have to gain insight into the performance and accessibility of your application. The other important thing to know is that this journey is constant. This isn't one where we have a clear finish line. When it comes to accessibility testing and performance testing, it's really about constant improvement: aspiring to improve your application with the understanding that things are going to constantly change. It's our responsibility to ensure that we're taking steps in the right direction, not introducing regressions along the way, and providing the best experience that we can for all of our customers.
All righty, it's time for questions now. Thank you, Eva and Juliet. Please feel free to put your questions in the Q&A panel now. Okay, let me start with the first question: how can I get started with accessibility testing?
Well, I would say, first of all, get an idea of the current state of your application. If you have already created a few mabl tests, you can add an accessibility test step to them. That way, you will begin gathering data on the tests you already have, and you won't have to create new ones, so it won't take you too much time, and it will give you a very good overview of where you are. And then, when it comes to manual testing, the one thing that I always recommend is to begin with keyboard navigation. Keyboard navigation is extremely important, very impactful, and it doesn't take too much time.
Okay, next question. What types of accessibility issues does mabl check for?
We check for WCAG 2.0 and 2.1, at the A, AA, and AAA levels.
Next question: is mabl used to automate these accessibility tests? How does it contribute to test coverage?
Let's see. It helps you cover more parts of your application. Even though it is non-functional coverage, it makes that information available to you, and whenever you are ready to make improvements, you will know how you're doing.
Can I also add here that we do have an accessibility dashboard in our application as well. What that does is aggregate accessibility issues across your entire application. So it's really a great place to better understand accessibility more holistically, and to dig into common issues that may be present across a large number of your tests.
Next question, what does the accessibility test actually look for?
So it looks for different things, depending on what you have configured. But it will mostly look for things like missing alternative text, misuse of ARIA labels, and misuse of ARIA attributes in general. So it will look for accessibility issues, and it will give you not only a report with the list of accessibility issues it has found, but also where to find each issue on your website: it will give you the DOM element that has the accessibility issue. And that is extremely useful when you want to fix it.
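Since the checks are built on axe-core (mentioned later in this session), the report described here resembles axe-core's results object, which nests violations, impacted nodes, and DOM selectors. This sketch shows how such a payload could be flattened into actionable rows; the sample data is hand-written for illustration, not real mabl output:

```python
# Flatten an axe-core-style results object into (rule, impact, selector)
# rows. The payload below mirrors axe-core's JSON shape but is made up.

sample_results = {
    "violations": [
        {
            "id": "image-alt",
            "impact": "critical",
            "help": "Images must have alternate text",
            "nodes": [{"target": ["img.hero-banner"]}],
        },
        {
            "id": "color-contrast",
            "impact": "serious",
            "help": "Elements must have sufficient color contrast",
            "nodes": [{"target": ["footer > small"]}],
        },
    ]
}

def summarize(results: dict) -> list[tuple[str, str, str]]:
    """Extract (rule id, impact, DOM selector) for each violating node."""
    rows = []
    for violation in results["violations"]:
        for node in violation["nodes"]:
            rows.append((violation["id"], violation["impact"],
                         node["target"][0]))
    return rows

for rule, impact, selector in summarize(sample_results):
    print(f"[{impact}] {rule}: {selector}")
```

The DOM selector in each row is what makes the report actionable: it points a developer straight at the element to fix.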
Can we set up alerts for anomalies in performance test results?
Yeah, so right now mabl is automatically tracking and generating insights for page load time and test execution time, flagging any anomalies from the baseline that we automatically build. All of those are available within the mabl app, but we also have integrations with Slack and Microsoft Teams if you want to get those notifications outside of mabl. And we're always interested in making more improvements, so if you have ideas, we would love to hear them.
Thank you. Next question: do you have control over how many load injectors there are, how the users are distributed, or the health of the load injectors?
This is a very interesting question. The honest answer is: not yet. mabl is currently in product discovery and development for load testing, so there's a lot of work we're doing there around understanding the parameters you need for load configuration, ramp-up time, and how you configure the number of tests running at a given time. So the answer is not yet, but certainly someday, and again, we're always looking for feedback on what is most relevant to your team.
Okay. What kinds of actionable feedback will mabl provide when accessibility tests are run?
Yeah. It will give you the list of issues, and it will give you the severity of each issue: it will tell you whether an issue is critical, serious, or moderate, which ties into the legal enforcement part. It will also give you the location of those issues, on which page and in which element, so it gives you a lot of information to make your own decisions about what to prioritize and where you need to go to fix it.
Yeah, and we're also using axe-core in our underlying implementation there, so a lot of those violations will include links to external documentation that outline how you can address those things on your end.
That's all the time we had for today. If you have any questions that we could not get to, we will connect you with our speakers after the conference. Thank you so much for joining, and see you at our next session at mabl Experience.