Scaling E2E Test Coverage in the Era of DevOps

Over 76% of organizations are undergoing DevOps adoption, speeding up throughput and release frequency. At the same time, expectations for great customer experiences are increasing, with 32% of users leaving brands after their first poor interaction. This puts a spotlight on testers: how do we deliver quality software while moving at the speed of DevOps?

We need a more user-centric approach to test creation and to optimizing application quality. In this on-demand webinar, Andrew Horgan of mabl walks through the challenges of traditional test creation, shares strategies for scaling E2E testing, and shows how to integrate E2E tests into your pipeline with mabl to ultimately grow test coverage.

TRANSCRIPT

Joe Colantonio
Did you know that over 76% of organizations are undergoing DevOps adoption, speeding up throughput and release frequency? If you're on this webinar, you've probably experienced this yourself. At the same time, as you probably know, your customers' expectations are increasing: a recent survey found that 32% of users leave brands after their first poor interaction.

So it's really critical you get this right. This puts a spotlight on testers: how do we deliver quality software while moving at the speed of DevOps? That's what this training is all about. Andrew Horgan at mabl is going to walk through some challenges of traditional test creation, share some awesome strategies around scaling end-to-end testing, and show how to integrate end-to-end testing into your pipelines, using mabl as an example, to help grow your test coverage. After this training, you'll know why you should embrace quality at the user-experience level, which is critical; the power of user-centric end-to-end testing; how to give developers fast feedback; how to report on overall coverage and quality; and methods for integrating your tests further into development.

Joining us today we have mabl. I first heard of mabl on my podcast way back in 2017 - that was episode 182, before they were even in beta; they weren't even live yet. I've gotten a lot of questions about them since then, so I'm really excited that mabl agreed to do this free training with us today. Joining us we have Andrew. Andrew is a solutions engineer at mabl, working with customers to identify pain points in their tests and find opportunities to leverage mabl to improve the process. He has a deep background as an IT specialist, working with Python scripting to automate manual workflows and with Selenium testing. All right, Andrew, we are about to go live. Let's do this!
Hey, Andrew, welcome to The Guild!

Andrew Horgan
Oh, Hi Joe! Thank you so much for having me. I really appreciate everyone taking some time to join the session today.

Joe Colantonio
Absolutely. We have some folks saying hi - Dan, Jennifer, Chiara. Thank you, everyone, for your input. All right, Andrew, it's all you.

Andrew Horgan
Great. Yeah, let's jump right into things. As Joe mentioned, the focus today is going to be examining how this transition to DevOps has affected testing. What are some of the unique challenges we see in that? What are the ways tools like mabl aim to solve some of those challenges? But for some background, nice to meet you all; my name is Andy. I'm a solutions engineer at mabl. I work hand in hand with folks like yourselves - prospects and customers - all trying to solve these challenges. Along the way, I've learned some pretty unique ways of tackling those problems, and I'm hoping to share all of those with you today.

Today we're going to look at the trends in the industry around this digital transformation. Then, how does a tool like mabl - a low-code, SaaS-native solution - help drive higher test coverage within our release pipelines? We're also going to talk about how we can scale testing throughout that entire DevOps cycle, from as early as local development through regular regression testing in production. Of course, we'll summarize all of this with key takeaways. I definitely want to leave plenty of time for questions and get into the nitty-gritty of what is most relevant to you all.

But to get started, let's set the stage: what is the current state of testing with this new DevOps culture? What's happening in our industry, of course, is that the rate at which we develop and release applications and software has accelerated. This digital transformation is partly the result of the adoption of DevOps in our release processes. Because the rate of change is so high, there's a pretty unique impact on things like customer experience. With the constant shift of applications and new features coming to market, customer experience has never been more important, and the ability to test that user-centric flow has never been more important.

We see that when we look at some interesting numbers from surveys: when customers are considering what product to buy, the user experience is paramount. We can look at things like customer retention after a single bad experience. I know I'm not alone here - you may be on a ticketing site and you just can't seem to proceed with your order, and the next thing you know, you're moving to the next vendor over. With competition so steep, the ability to deliver a quality product is absolutely critical. An interesting data point about employees themselves concerns retention: employees may consider leaving, or will leave, their current positions if the software they use for their own work has issues. So it's not isolated to customer-facing apps - it applies to business-to-business software, really across the spectrum.

Ensuring quality customer experiences has never been more crucial. And how do we get to that level of customer happiness? Well, it's rooted in this idea of test coverage. The complexity of applications has grown significantly, as has the rate of change. So how do we ensure quality across these vastly large applications with integrations into multiple systems? That's driven by test coverage. As we drive up that test coverage metric and then poll customers in NPS surveys, we see a dramatic shift in that happiness paradigm. So even though it sounds pretty intuitive: higher test coverage means better applications, and better applications mean happier customers.

How we get to this test coverage is a challenge in itself. We'll talk all about that as we dive into the platform. But DevOps is not a light switch we turn on, right? I think many of us are in this sort of ebb and flow, this striving or aspiring state of wanting to be more DevOps. Maybe you're starting small, automating some of those workflows with GitHub or Jenkins on a weekly cadence. Some of you may be fully DevOps and committing directly to production - good for you. But it is very much a transformation that takes time. We want to make sure that we have software testing that can aid in that journey, and not become a barrier or blocker to that aspiration toward DevOps. Because this is the picture we want: high velocity and high throughput, where we can release features and updates to our application as fast as we can develop them, all the while QA is wide open and able to validate and ensure quality throughout that entire throughput. But this is where we see a pretty common challenge.

QA becomes a bottleneck, because we've had all these great tools developed to improve the rate of change for development - from CI/CD pipelines to Git to all these different types of tools that help development work increase productivity. But the traditional tools designed or built for QA weren't designed for that rate of throughput, or for the application complexity we deal with today. What might have been an end-to-end test 20 years ago may have consisted of, say, logging in, checking a balance, and logging out. How does that same traditional tool work when your end-to-end test case is not just logging in and checking a balance? It's logging in, checking a balance, sending that balance to a customer who then receives an email and opens that balance in a PDF, and validating via some API call that it is the correct value in the back end.

The complexity of an end-to-end flow, and the complexity of trying to keep up with the rate of change in development, has caused this sort of bottleneck in QA. As all of us are well aware, that is where we want to improve our process: shift testing left and widen that bottleneck. This is where a tool like mabl starts to come into play. Why use a solution that is low-code, and how does that drive higher test coverage? Low-code test automation allows us to be more user-centric when designing those end-to-end tests. Think back to the example I was just talking about: we're looking at testing the full entirety of a user flow. It's not just logging in and checking a balance; it's making sure the customer can receive emails and open PDFs, while at the same time verifying something in the back end, maybe via API. Encapsulating that full end-to-end journey from the user's perspective not only drives better quality, but allows us to expand test coverage in a very approachable way that is more efficient as we scale.

So why low-code? Mabl as a low-code solution is powered by intelligence. It's a SaaS-native, cloud-based testing solution. Because of all these factors, we can enable true collaboration around quality. We like to say we democratize testing, so that the folks who may have the most domain knowledge of an application - perhaps business analysts or manual testers - are not only able to continue to ensure quality, but can now contribute to automation, because the barrier to entry is suddenly much more approachable. As a SaaS-native solution, mabl unlocks a lot of granular insights into application quality - everything from really rich diagnostic data to insights around performance metrics, and all sorts of really interesting things. And finally, the way we're able to integrate with your pipelines means we can easily ensure quality by running our tests right alongside each of those deployment events, through native integrations and tools like our CI runner and local executions. But the core of how low-code enables teams to generate greater test coverage more quickly is the low-code experience itself.

If we look at this slide here, our implementation of the simple step of clicking a link is straightforward: we simply record that click, and our step is generated. A scripted implementation, by contrast, is a technical barrier: not only does it have a higher bar to entry, requiring more technical scripting, it also means we're sometimes more confined in certain ways. We'll dive much more into this low-code experience as we start to work with the trainer. But the key takeaway here is that we want to separate the implementation from the user experience, or intent, in our tests. I mentioned collaboration, and this is really from that point of view: everyone across the organization is not only able to contribute to automation in a tool like mabl, they're able to get visibility into how tests are performing, while the burden of maintaining infrastructure and performing maintenance is removed through the intelligence of auto-healing, self-scaling, parallelization, and cross-browser capabilities. This lets us stay focused not on the implementation of the tests themselves, but on interpreting results and expanding test coverage to places we weren't necessarily testing before.

With those granular insights unlocked by a SaaS-native solution, we can get analytical in the results we review. Take a look at our release coverage based on real stats: how many tests are we running in a given sprint? Are we running our target number of tests before we feel confident in promoting a build? And then there's all that rich diagnostic data I alluded to, which I'll show later - things like network errors, DOM snapshots, and step traces, all consolidated for each step of every test, so that when we're investigating results, we can be much more efficient with our time.

That's it for these initial slides. I think what folks want to see is: how do I use the tool? Can you show me a little bit of this? So let's jump right into mabl as a solution. I'd like to start by showing the dashboard - what the user experience is like when you log in. This is the mabl-on-mabl workspace; we use mabl to test itself. Mabl is a React app. The reason I bring this up is that it really drives home a couple of core tenets of mabl. Mabl is a unified solution. That means we're not just doing UI browser testing: we can run headless API tests imported from Postman, we can run accessibility testing, and we can get performance insights and visual change detection.

So again, there's the user-centric approach of testing the whole of the user experience. We see that clearly in this one-hour snapshot. All these tests stacked on top of each other are the result of mabl, as a solution, not having any limits on parallelization. We often see customers take their regression suites, which previously might have run over multiple days, and perform them in mere hours, because they're scaling all of those tests in parallel with no real limitation. That just means faster feedback for the entire team.

As we start to think about scalability: as I look at these different cards, they each represent a different environment configured in this workspace. With mabl, not only can I quickly drill down into insights about the test run history for my dev environment and top failure reasons, but as we build out our test repository, we'll organize tests into plans, which you can think of as test suites. Those plans can be easily scaled across all of our different environments - dev, production, staging, whatever it might be. Very portable and organized. This will tie back into that earlier notion of integrating our tests within the entire release pipeline, and we'll talk at length about what type of testing I'm performing at each stage of that development process.

Joe Colantonio
Someone just had an interesting comment. Demetrius said low-code test automation is only for teams with low coding knowledge. So for what you're showing here, how hard was it for a team to get up to speed with setting this dashboard up, or setting people up in their system to return all this information you're showing?

Andrew Horgan
Yeah, that's a great question. Another benefit of a SaaS solution is that the dashboard you see here is built out of the box. From an implementation standpoint, to get started with mabl, all I need is to download our Electron desktop app. Then I have everything I need to start creating tests, running them in the cloud, and getting all of that great insight - that data compiles into these dashboards, and I can start interpreting test results. So from that macro lens, it is very low-code in terms of built-in dashboards and things of that nature. And test authoring itself, of course, is also low-code. I'll show you all of that in just a moment.

Joe Colantonio
Cool.

Andrew Horgan
Well, yeah, why don't we jump right into it then. I'll go ahead and start by creating a new test here. We can see I can, again, create a browser test. We can do standalone API testing, and I'll go over that in a moment as well. But let's start with the browser test, which of course I'll give a name. I can give a description and labels. Labels are like tags; they let me stay organized and, say, filter results by tests with the label 'smoke'. I'll train this from a desktop perspective, but we can also run our tests against emulated mobile web. A lot of our customers opt for a solution that, again, is unified, and we find a lot of efficiency in having a single solution to test both of those user experiences. I'll add this to an existing plan, or test suite. And then the final thing is I'll add a data table to the test. A data table is a spreadsheet of data - it could be a .csv that we import into the workspace - that allows me to train a test with one data set, but then execute that test across all of those data sets in parallel. So again, we're building in data-driven testing; when you think about expanding test coverage, being able to run multiple permutations from a single test skeleton, if you will, is an efficient way to do so.
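To make the data-driven idea concrete, here is a minimal JavaScript sketch of one test "skeleton" executed once per row of a data table. This is illustrative only, not mabl's internals; the row fields and test name are hypothetical.

```javascript
// A minimal sketch of data-driven testing (illustrative, not mabl's internals):
// one recorded test "skeleton" is executed once per row of a data table.
const dataTable = [
  { email: 'alice@example.com', plan: 'starter' },   // hypothetical rows, e.g.
  { email: 'bob@example.com', plan: 'enterprise' },  // imported from a .csv file
];

async function signupTest({ email, plan }) {
  // ...the same recorded steps, driven by this row's values...
  console.log(`Running signup test for ${email} on the ${plan} plan`);
}

// Each row becomes an independent run; a cloud runner can execute them in parallel.
Promise.all(dataTable.map((row) => signupTest(row)))
  .then(() => console.log('All permutations done'));
```

The key point is that adding a row expands coverage without touching the test itself.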

When we talk about the benefit of low-code, we can start to demonstrate it with the mabl trainer we see here. My app just spun up a new browser instance, and our trainer is off to the right. The trainer is what removes that technical challenge of implementing an underlying test script. Rather, while this is recording, any action I take, like clicking a button within my application, automatically generates a step in our test. Likewise, we may want to validate some expected state or behavior. In this case, if I try to log in without entering an email or password, I should get some sort of error tooltip. So we can assert against this new element easily, in this point-and-click manner. I can be very flexible: I can change the type of logic I want to apply, perhaps just 'contains', or choose a different attribute or property - for instance, the class should equal 'error'. We could keep it simple, or be strict, like asserting the inner text equals 'This cannot be blank.' With a couple of clicks, we've already started to include some negative testing in this flow. Now, speaking of flows, a really important concept in mabl is our notion of flows, which is simply a reusable set of steps I can repurpose throughout my different tests. In this case, I have a login flow. It could be creating some records, it could be updating something; it could be very complex or very simple. As teams get acclimated to mabl, they find they have a large repository of these microflows that can be stacked together like Lego pieces. What this means is that when I go to create a new test, rather than redefining these common functions, I can simply import all the appropriate flows and start authoring and generating new test coverage rapidly. It also means, from a maintenance perspective, that if I have any change to this flow, rather than updating all 17 or 100 tests individually, I only need to update it in one place.
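The flow concept maps onto a familiar programming idea: a shared function reused across tests. Here is a conceptual JavaScript sketch using a Playwright-style page object for concreteness; the URL and selectors are hypothetical, and in mabl these steps are recorded rather than written.

```javascript
// A conceptual sketch of flows as reusable building blocks (hypothetical app
// URL and selectors; Playwright-style page API used for concreteness).
async function loginFlow(page, { email, password }) {
  await page.goto('https://app.example.com/login');
  await page.fill('#email', email);
  await page.fill('#password', password);
  await page.click('button[type=submit]');
}

async function createClientTest(page) {
  // Reuse the shared flow instead of re-recording the same steps...
  await loginFlow(page, { email: 'qa@example.com', password: 'secret' });
  // ...then add only the steps unique to this test.
  await page.click('text=New Client');
}
```

If the login page changes, only the one flow needs updating - the same maintenance win described above.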

Joe Colantonio
Andrew, does it alert you if you try to create an action that already exists in the system? Say you had a login flow, and another sprint team creates a login that's already there - does it prompt you to say, hey, we have a login, would you like to use this one?

Andrew Horgan
No, it doesn't, but that's really good feedback and perhaps a point for our product team. As you said, there are certainly best practices for things like that. We like to use naming conventions: what is the feature that this flow touches, and then what does it do? So there are all sorts of internal ways to be more efficient with that.

Joe Colantonio
Perfect.

Andrew Horgan
As I'm going through this end-to-end journey, maybe I'm creating a new client. Again, I'm simply walking through the application as a user would - perhaps entering my first name. But we can also be more strategic with how we enter or create test data in our application. For instance, I can generate dynamic data with the Faker JS library. So maybe fake.name.lastName - we'll see this generate a new randomized name at runtime every time we execute this test case. Again, it's very simple to start using that data in this point-and-click way.
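For reference, here is a minimal sketch of the same idea using the faker library directly. mabl exposes similar generators in the trainer (as mentioned, along the lines of fake.name.lastName); the exact mabl syntax may differ from this standalone usage.

```javascript
// A minimal sketch of runtime data generation with the faker library
// (standalone usage; mabl's built-in generator syntax may differ).
import { faker } from '@faker-js/faker';

const lastName = faker.person.lastName(); // a new random name on every run
console.log(`Creating client: ${lastName}`);
```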

But data can come from a variety of sources. I mentioned that data table; we can see it represented in our variables menu here. I can also fetch data from an API endpoint. This can be useful if, say, I've created my new client and now want to verify it exists in the back end - maybe I make an API request. So outside of standalone API testing, we can embed API calls in our UI tests as well. API testing is also a low-code approach: I can set some endpoint - in this case, checking the weather at mabl HQ - and go ahead and click send. Then I can start to work with this data easily. I can make assertions based on some JSON body path, like name, which is expected to be Boston - which is the case. And again, if I'm using a variable to create that record, let's go ahead and verify the same value is present in the back end when I make that call. We can extract data into variables, whatever they might be.
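Here is a hedged sketch of what such an embedded API check amounts to: fetch a JSON response and assert on a body path. The endpoint and response shape are illustrative, loosely modeled on a public weather API; in mabl this is point-and-click rather than code.

```javascript
// A sketch of an API check with a body-path assertion (hypothetical endpoint;
// run as an ES module on Node 18+, where fetch and top-level await are available).
import assert from 'node:assert';

const res = await fetch('https://api.example.com/weather?q=Boston');
const body = await res.json();

assert.equal(body.name, 'Boston'); // assert on the "name" body path
const temp = body.main.temp;       // or extract a value into a variable for later steps
console.log(`Temperature at mabl HQ: ${temp}`);
```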

Now, outside of data and that sort of flexibility, one thing I always try to highlight is that mabl does something differently from a lot of traditional tools, and it solves one of the big pain points in automation: flakiness in tests. What we don't see is the background of the steps, but I can show you a little more about the amount of data, or context, mabl has captured as part of these tests. This is just a sample of about four, but anywhere from 20 to 40 unique identifiers are being captured to locate and target the elements we interact with. That includes ancestor and descendant elements.

What this means is, should anything change in our application under test - whether the location changes via XPath or the inner text has changed slightly - mabl can use intelligence to identify those types of changes and then passively adapt the underlying test script, effectively auto-healing or fixing the test for you while it executes. When that happens, we generate an auto-healed test. In this instance, we see an intentional change to the application, such as this menu item having new inner text, and mabl was able to auto-heal around it. Just as pertinent is the ability to auto-heal when we're working with applications that are dynamic in nature. Things like React apps or Salesforce, for example, can be very tricky to automate against, because how do I maintain the underlying framework of query selectors or page object models? So this is aimed at helping to solve that type of challenge.
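To give a feel for multi-attribute targeting, here is a conceptual sketch of the fall-through idea. This is not mabl's actual algorithm; the selectors are illustrative. The point is that capturing several candidate locators means one changed attribute doesn't break the test.

```javascript
// A conceptual sketch of multi-attribute element targeting (not mabl's actual
// algorithm; selectors are illustrative).
const candidateSelectors = [
  '#checkout-button',           // id captured when the test was trained
  'button[data-test=checkout]', // a stable test attribute
  'form.cart button.primary',   // an ancestor/structure-based fallback
];

function findElement(selectors) {
  for (const selector of selectors) {
    const el = document.querySelector(selector);
    if (el) return el; // the first locator that still matches wins
  }
  throw new Error('No candidate locator matched - flag the test for review');
}

findElement(candidateSelectors).click();
```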

As we go through the rest of the test here, the final piece to highlight is the flexibility of what types of steps I can include within a given test. I kept referring to a very complex user flow that maybe logs in to my application, creates a balance, and sends an email, and that email has a PDF - how do we test that type of user flow? If I switch over to a sandbox environment, where I can quickly trigger a PDF or email, we can see that mabl by default can traverse multiple applications for those complex flows, switching from FreshBooks to the sandbox and doing things like PDF validation, where mabl recognizes that a PDF has been downloaded, processes it, and renders it within the browser. Now I can begin to validate it as if it were any other web page. So again, I can make assertions, and I can make those assertions data-driven - we're just very flexible in the types of end-to-end tests we can automate.

The final piece here is that mabl is low-code, not no-code. There are instances where you may need to do something very specific, like parse a string with regex, or dynamically generate a date in a certain format for a date picker. So mabl enables teams to use that coding skill set as well. Through JavaScript steps, I can write custom code to execute, but then parameterize it and make it reusable for teams. So even though Joe may know how to code, I may not know how to write code, and looking at this kind of JavaScript, it might be outside my wheelhouse. I can still effectively use all of it to generate data, simply by adjusting the parameters of the script. So it's very easy to start leveraging JavaScript-level coding while also creating a more collaborative environment around that type of work.
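As an illustration, here is a minimal sketch of a parameterized snippet in the spirit of those JavaScript steps. The function and parameter names are illustrative: one teammate writes it, others reuse it by changing parameters rather than code.

```javascript
// A minimal sketch of a parameterized snippet for a date picker
// (function and parameter names are illustrative).
function formatDate(daysFromNow, locale = 'en-US') {
  const d = new Date();
  d.setDate(d.getDate() + Number(daysFromNow));
  return d.toLocaleDateString(locale); // e.g. "6/14/2025", ready for a date picker
}

console.log(formatDate(7));  // a date one week out, regenerated on every run
console.log(formatDate(30)); // same snippet, different parameter
```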

But let's say we've created our end-to-end tests. The next piece is how we organize them in a way that's going to drive value for the team. For that notion of plans, or test suites, we can view an example here: we may have a regression suite that creates some client data, validates some actions in our application, like generating a PDF or an email, and then removes any test data. As I'm working through this plan, some things to point out: I can trigger this plan on a set schedule, like Tuesdays and Thursdays, or on a timer every 12 hours. I can also trigger it to run on deployment. When I'm integrating these plans into our CI/CD tools, we can reference the labels we apply to these plans. What that looks like, if I'm using Jenkins or GitHub with the native mabl integration, is that we set it up so that on every pull request, we run our subset of mabl plans with the label 'regression' or 'smoke'.
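Conceptually, the deployment trigger boils down to your pipeline notifying mabl that a deployment happened, with labels selecting which plans run. The sketch below shows that shape; the endpoint, field names, and auth scheme are assumptions based on the concept described here, so check mabl's current API documentation before relying on them.

```javascript
// A hedged sketch of sending a deployment event so label-matched plans run.
// Endpoint, field names, and auth are assumptions - verify against mabl's docs.
const auth = Buffer.from(`key:${process.env.MABL_API_KEY}`).toString('base64');

const res = await fetch('https://api.mabl.com/events/deployment', { // assumed endpoint
  method: 'POST',
  headers: { 'Content-Type': 'application/json', Authorization: `Basic ${auth}` },
  body: JSON.stringify({
    environment_id: process.env.MABL_ENVIRONMENT_ID, // assumed field name
    plan_labels: ['smoke'],                          // run every plan labeled "smoke"
  }),
});
console.log('Deployment event status:', res.status);
```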

Again, we can easily scale this across environments. But we can also introduce some order and some logic into how these tests run: tests can run sequentially in a first, separate stage because they're dependent on each other, or in a second stage all in parallel, because they're independent and we want to drive the most efficiency at runtime. Finally, we can have this last stage set to always run. This allows us to make sure we always go back and clean up test data, whether that's through the UI, deleting the client and the invoice, or even using something like an API call to do a bulk deletion of any test data.

Data within these stages can be passed downstream, so they can work in tandem to achieve that full end-to-end regression functionality. And we can mix and match UI and API tests together to get the most efficiency out of our tests and expand test coverage as much as we can. The last piece about scalability that I'll mention is just how we want to run this suite of tests.

Even though we train in Chrome, every test is by default cross-browser. If I want to run this across all four major browsers, I simply choose the appropriate device settings in the plan, and we're now scaling this across Chrome, Firefox, etc. What we hear often from teams is that maintaining an actual Selenium Grid and making sure web drivers stay in sync can be a source of maintenance trouble as well.

Joe Colantonio
A quick question on the screen here, Andrew. So you created a test using the web. Do you have to do anything other than just select mobile to run it on mobile? Do you have to modify that script in any other way?

Andrew Horgan
Yeah, that's a good question. Truthfully, there are some modifications to bear in mind when you are running a test on mobile, and we will notify you: hey, because of the way mobile responsive design works, there's a chance you'll have to refactor some of these tests. We do make this a little easier for you, though. If I have a test I want to refactor for mobile, I can take the given test and simply duplicate it, so I can now start to work on this version for mobile. I'll make a quick change here - 'mobile compatible - create a client' - and simply update the device default to that mobile device. What this allows me to do first, before even spending any cycles trying to refactor the test, is just run it headless, or run it with the browser, and verify whether I need to make any changes. So in addition to executing in the cloud, where we get all sorts of scalability benefits and rich diagnostics...

...we can run tests locally through the desktop app or a command-line interface. You'll see that sometimes things don't quite work the same way you'd expect in the mobile responsive counterpart. So naturally, now I come in and start to make some updates. When I enter this test, we see it in that mobile responsive view, and I can make the appropriate changes, whether that's reselecting my target element to enter, say, my username or my password. So we can see how I can very quickly make those updates so the test is compatible with a mobile device. Now I can run all these tests on mobile as well.

Another area I just wanted to highlight, as far as some of the unique benefits of a unified solution like mabl: not only can it test that mobile compatibility, but we also mentioned things like accessibility testing. I think what's very interesting is the notion of using APIs as part of our tests as well. If I look at this end-to-end API example and come down to some recent results, we do some pretty interesting things here. Rather than using the UI to generate our test data or tear it down, we can let an API handle all of that for us. Then we'll notice that the runtime for these tests is much more efficient. Again, I can connect all of these in a plan so that the data can move downstream. But when I run this test, we're much more efficient at runtime. This is the type of regression suite that has high coverage of our application but runs efficiently, because we're using APIs to set up and tear down our data.

Also unique to mabl is its way of authoring API tests. We give you an interface that makes it simple to set some endpoint - perhaps sending a header or some particular body, like JSON - and go ahead and send that API request. We start to get that data back just like before. Rather than needing to know something like postman.setEnvironmentVariable('XYZ', ...), if I know that the JSON body path is main.temp, that's all I need to make an assertion or create a variable. But with Postman in mind, we can import those collections directly into mabl, and you can start to run them alongside your UI tests. We support Postman exports, and in a similar low-code way, I can use that Postman scripting directly within mabl API tests as well.

Now I want to make sure I have time to come full circle: how does this all integrate within my pipelines? What kind of results do I get? So I'll keep moving along and talk a little bit about running all of these tests. How does the fact that mabl is a unified solution make it easier and more efficient to release quality software more quickly? Part of that's driven by this idea of really rich diagnostic data. For each step of every test we run in mabl, we're capturing a lot of data - everything from screenshots, which in this case we can compare to a baseline to see visual changes occurring within this test. The goal when interpreting a test is understanding: did this fail because of a true regression? Was it an environmental issue? Was it a test implementation issue? In this case, I have a very handy error message that tells me something went wrong, but we don't always get that lucky, and sometimes we need to dig a little deeper. If I were testing this manually, I might need to recreate the failure state and extract some logs.

But all of that's captured by mabl on the first go-round - everything from network errors to the DOM snapshot to performance metrics that trace the application state throughout the entire test. In this instance, even though we failed on step 22, mabl tells me we passed with a warning at step 21. As I investigate these logs, I see there's a 502 Bad Gateway related to this particular API request. So in a short amount of time, I've been able to identify this as an environment issue, and I know exactly which API is related to this failed test. If I'm integrating with JIRA, when I create this issue with our native integration, not only am I providing context about the issue and the situation that occurred - all of the data we saw within that test run attaches directly to the JIRA ticket. Some customers see a reduction in mean time to resolution of upwards of 80%. That can be powerful for teams trying to triage and troubleshoot failed tests.

Now, to work things back to the final pieces of the demo: we've created tests, we've organized them in plans, and we're running them in the cloud. I showed you a little bit about those local runs. But now, how do I start to integrate this within my entire DevOps pipeline? Think about a situation where I'm a developer working on some feature code in VS Code. As I'm developing this new feature, I might want to verify, before even creating a pull request, that I'm not introducing any new issues on my local build. So I can grab this mabl test run command and provide it within my CLI, even providing a URL override to say, let's run this against my localhost environment. So that's a level of testing performed before actually committing the code. As we move down the pipeline, I may want to integrate mabl as part of my PR process. Through things like GitHub Actions, we can run those same types of headless checks within the CI environment, directly within GitHub.
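For flavor, here is a hedged sketch of that pre-commit check: shelling out to the mabl CLI and pointing a test at the local build. The command and flags shown are assumptions for illustration; consult the mabl CLI help for the real syntax.

```javascript
// A hedged sketch of a local pre-commit check (assumed CLI command and flags;
// <test-id> is a placeholder - verify syntax with the mabl CLI help).
import { execSync } from 'node:child_process';

execSync(
  'mabl tests run --id <test-id> --url http://localhost:3000',
  { stdio: 'inherit' } // stream pass/fail output into the terminal
);
```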

As we move further down, maybe we start to integrate our cloud executions, where we get all of that scalability and all the rich diagnostic data. That's where we can start to leverage more of those integrations. For example, when I use Jenkins and I'm doing a build, I can trigger those mabl tests to run, and in Jenkins I can see exactly what tests ran, what passed, and what failed. Here I'm getting the benefit of all that scalability of the cloud - running tests in parallel, running tests cross-browser. From mabl's perspective, I can look at all of those results by deployment to get a lot of insight into what's happening within my tests.

Knowing we have about 18 minutes left, I think the final piece I'll talk about is: we've generated some test coverage, we've embedded it; the last piece is how we analyze these test results. I may look at things like our release coverage page. Let's measure the quality of our testing within a sprint: over the last 14 days, what is my pass rate? Have I run the full target of cumulative test runs for this given sprint? Then I get some high-level metrics around things like test run history, as well as nonfunctional testing data, like accessibility. With a click, we can start to embed accessibility checks, and then we can start to understand what level of improvement we're seeing as it relates to WCAG 2 compliance.

Are we seeing a reduction in critical violations? Are they increasing? And if critical violations are occurring, let's get some insight into exactly where in our application they're happening. Again, that's where the notion of a unified solution starts to come into play. But to summarize, and to double down on the focus here: we want to think about not just having an easy way to generate test coverage through a low-code tool like mabl, but also having a means to effectively scale it within DevOps. This is pretty similar to the workflow I described earlier, where in that early development stage, developing code on localhost, we can leverage things like the desktop app and our CLI to run tests before we even integrate, before we even create those pull requests.

We test within pull requests, within deployments, and while we run in production. We want some means of ensuring quality along the way, so that at each gate we're catching bugs as early as possible and reducing technical debt and the cost to fix those bugs. Here's an example from one of our customers, Ritual, which is using mabl throughout their entire pipeline. They're using features like branching to maintain isolated changes to their mabl tests alongside preview environments, merging those tests as they move new features down the pipeline. Along the way, of course, they're integrating those test runs with some of our different integrations. It just goes to show how a low-code approach makes it much more accessible to have a DevOps approach to quality alongside your development.

As we double down on these individual components: during the code change, the coding stage, I think efficiency and velocity are key. We want to ensure our most common, gold-standard happy-path tests can pass successfully, and we want to run them efficiently as well. So running headless with our CI runner, or with the CLI or the desktop app, ensures that testing isn't bogging down our ability to release new features. Then we get to commits, before pull request approval. As we gather all of these new commits, we want to ensure a level of quality through things like our CI runner embedded within that preview environment. We want a little more breadth of coverage as far as those tests go. This is also where you'll want to ensure you're running those tests on the branches you're developing, alongside those code changes.

As we move further down the pipeline - moving code from staging to production, or QA to staging - this is where we want to expand the type of tests that we're running. We want more regression testing; we might want to include UI and API tests, as well as running these tests cross-browser. This is where you get all the great benefits of a SaaS-native solution like mabl. If we were to run cross-browser, across different environments, with our full regression suite in sequence, chances are that's going to bog down your velocity and deployments. But given that there are no limits to parallelization, we don't have to forgo the quality of cross-browser testing, or testing across environments, because we have that infrastructure on demand, enabled by mabl.

Then finally, in production: the goal here, of course, is that if there's a bug in production, we want our tests to catch it before a user does. Because, as we've talked about, that user experience is so critical to the success of our organizations and our software. Early detection is key, through regular smoke tests and ensuring that things like third-party dependencies or integrations are performing as expected. That's really at the core of what integrating tests in your pipeline is all about.

So, the key takeaways here. Looking at the numbers, when you're able to reduce the amount of maintenance in your testing efforts with the intelligence of things like auto-healing, it translates into being able to more effectively scale your test coverage. When we spend less of our time maintaining tests, that time is repurposed into increasing our test coverage, which, again, brings about a higher happiness score with our customers. And as a result of all of that, we can deploy more frequently, because we're ensuring quality at a much higher rate. We encourage folks to learn more about mabl: what is the benefit of scaling with a unified platform, and how do we unlock some of those granular insights with native cloud functionality and customer-centric tests?

Again, the focus is on improving test coverage to ensure successful customer outcomes, creating a platform that emphasizes collaboration through things like low-code, and having a solution that can be embedded throughout your entire end-to-end release pipeline. From here, if folks want to talk or connect with me or any of our amazing team, you can go to mabl.com, start up a trial - it's free for 14 days - and connect with our amazing support team to learn more. We always like to talk through and understand more about your own goals and strategies so that we can line up the right resources to ensure you're getting the most out of that time. With 10 minutes left, I'll open it up if there are any questions I can answer for the group.

Joe Colantonio
Awesome, let's do it. Andrew, could you please stop sharing, just so you can go full screen and people can see your awesomeness as you answer these questions. Cool. So the first question we have: running end-to-end tests leads to creating data in production test systems, like entries in the database and analytics events being triggered by running a test scenario. So how do you approach setup and teardown steps with mabl? Is that possible?

Andrew Horgan
Yeah, so Joe, hopefully what I showed earlier, looking at end-to-end testing with UI and API tests in a plan, helps address that. At the core, we can handle a lot of that setup and teardown via API requests, which means we can be very efficient in handling all of that data. But we've also seen customers doing it directly within the UI. I think there's an argument for both: sometimes you want to test the ability to create some record in the UI, modify it, and then delete it, all from that front-end perspective. And then there's an argument for when you're really only concerned with how you can modify it in the UI; in those instances, use something like API requests to set up and tear down that data. But again, plans are what allow us to get that synergy between UI and API tests running together.
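Here is an illustrative sketch of that API-based setup/teardown pattern around a UI stage. The endpoints are hypothetical; the shape is what matters: create the record quickly via API, exercise it in the UI, then clean up in an always-run final stage.

```javascript
// An illustrative sketch of API setup and teardown around a UI test stage
// (hypothetical endpoints; run as an ES module on Node 18+ for fetch/await).
const base = 'https://api.example.com';

// Setup stage: create a client via API and pass its id downstream.
const created = await fetch(`${base}/clients`, {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ name: 'QA Test Client' }),
}).then((r) => r.json());

// ...the UI test stage runs against created.id...

// Teardown stage (always run): remove test data even if the UI stage failed.
await fetch(`${base}/clients/${created.id}`, { method: 'DELETE' });
```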

Joe Colantonio
Awesome. Next question: Andrew, can you run across different devices - for example, iPad, Galaxy tablet, iPad Air, Motorola - all the devices, all the things, I guess? Since it's SaaS, I assume yes, but I want to know if it's possible.

Andrew Horgan
Yeah, it's a fair assumption. As I was showing in plans, similar to how I chose to run cross-browser, if we choose that mobile device section, there are several different device profiles in the drop-down list. All of the ones listed in that question are included, and it's a pretty vast list that we keep up to date with the most modern and typical devices you might test against.

Joe Colantonio
Thanks. So you said this is low-code test automation, not no-code. The question then is what programming language is required for this tool?

Andrew Horgan
Excellent question. When we say programming language required, I say none: you don't need to know a programming language to use mabl. But if you want to use some of that custom functionality with JavaScript, well, JavaScript would be your answer. That component is customizable - you can write whatever you like in JavaScript as part of your tests. We also maintain a public repo of what we call JavaScript snippets, available for anyone to use, covering all sorts of really cool custom functionality like bringing up context menus or validating race conditions on toast messages. Again, very common things like date pickers are all handled very well with JavaScript.

Joe Colantonio
So this is along the same lines, I don't know if you have an action for creating custom code that you drag in, but he wants to know, can you make custom actions by just writing code? Or is it just modifying an existing action with JavaScript?

Andrew Horgan
Yeah, Matt, you can. JavaScript isn't necessarily just 'I need to transform some data with regex,' right? While that's a pretty common case for why you'd want to use JavaScript, sometimes you may want to perform a click step using JavaScript. The same way that, for your selectors, there's an option to find elements using CSS or XPath, you can do the same with JavaScript: find an element and then perform an action. Perhaps you want to modify it, make it do something else, or create a whole new element on the page. You can do all of that with JavaScript.
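As a minimal sketch, this is the kind of plain DOM code such a JavaScript step can run in the page (the selectors are illustrative):

```javascript
// Find an element yourself and act on it, instead of using a recorded step
// (selectors are illustrative).
const el = document.querySelector('button[data-test=export]');
if (!el) throw new Error('Element not found');

el.click(); // perform the click directly from script
// Scripts can also modify the page, e.g. reveal a hidden debug panel:
document.querySelector('#debug-panel')?.removeAttribute('hidden');
```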

Joe Colantonio
Nice. Now, I think you touched on this: visual validation testing. I assume you can, since you showed PDFs. But can you do visual testing, like comparing images?

Andrew Horgan
Yeah, we can do visual testing. It's interesting - there's a lot of passive visual change detection and visual testing built right into the platform. As we're running these tests in the cloud, it does those comparisons and can detect any visual anomalies. In fact, we apply some machine learning to those models so we can get smarter about filtering out visual changes. If there's content in your application that is dynamic in nature, it will start to ignore visual changes to those dynamic components and only alert you when there are changes to static components - think about your navigation menu. In addition to that, we can supply an entire list of different URLs in what we call a visual test. That lets you quickly validate a series of URLs against a certain visual representation, and it flags or notifies you of any deviation from it.

Joe Colantonio
Nice. To use mabl, is there any setup required if you don't have things like CI/CD, Git, Jenkins, etc. in place?

Andrew Horgan
That's a great question. We make it easy to integrate with those tools, but by no means do you need any of them to get the most out of mabl. In addition to being able to train tests and run them in an ad hoc manner, or on a schedule in the app itself, you can trigger deployments via the CLI. So if you're developing code, you can create a deployment event directly from your terminal. Or you can create deployment events directly in mabl, where you can supply all sorts of parameters around running these tests against a URL override - so if you're deploying to an ephemeral environment, you can set that at runtime. In short, while it certainly helps if you have CI/CD in place and can integrate directly, there are plenty of other ways to get the most out of mabl, even without those tools.

Joe Colantonio
Perfect. So, why mabl compared to other similar low-code tools out there? What advantages does it have over other software - not to dog any other software - but what's mabl known for that maybe makes it a good tool for certain teams?

Andrew Horgan
I think the one thing I find most of our customers appreciate most is the fact that mabl has taken a very holistic approach to the full, unified testing process. I keep talking about not only testing the UI but testing your APIs at the same time, in a very mature and thoughtful way, alongside visual testing, performance insights, and accessibility testing. All of these things represent the whole of the user experience. I think mabl is pretty unique in deciding to produce a solution that addresses all of these needs in a very holistic way, developed from the ground up.

Joe Colantonio
Andrew, I don’t want to get you in trouble but here is some controversy. What is more reliable? Low-code tests or coded tests (like Selenium)?

Andrew Horgan
Yeah, I find that every customer I work with sings the praises of the reliability of mabl versus a tool like Selenium. It's not to bash Selenium, either, because I think Selenium is really useful for very small, segmented tests. But when you have a massive repository, it's big and bulky; when things change, they break badly. Mabl is adaptive in how it uses full context - again, on reliability, we often hear from customers that they see an 80% reduction in maintenance when switching from Selenium. I think there's a place for the right tool in the right place, and there's a world where lots of things can coexist. But in short, I do find mabl to be much more reliable when it comes to adapting your test scripts.

Joe Colantonio
Okay, we have time for one last question. Can it handle maps, like Mapbox, and identify objects on a map? I know it's always hard - let me know if you have a use case for that.

Andrew Horgan
Yeah man, that's a good question. Similar to things like canvas elements, where there isn't a DOM structure to target, part of that component is going to tend to fall outside mabl's scope. It's not to say we don't have customers that use maps - I've worked with several that do. But we scope mabl's usage there to: as long as I click on the center of that canvas component, my tests will do what I need them to do. Beyond that, I think we're limited to coordinate-based approaches to canvas clicks, which, coincidentally, is another one of the snippets you can find on our public repo.
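For the curious, here is a hedged sketch of what a coordinate-based canvas click looks like (the selector and offsets are illustrative; the public snippet may differ). Since canvas content has no DOM to target, the script synthesizes a click at a point relative to the element.

```javascript
// A sketch of a coordinate-based canvas click (illustrative selector/offsets).
const canvas = document.querySelector('canvas');
const rect = canvas.getBoundingClientRect();

canvas.dispatchEvent(new MouseEvent('click', {
  bubbles: true,
  clientX: rect.left + rect.width / 2, // center of the canvas
  clientY: rect.top + rect.height / 2,
}));
```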

Joe Colantonio
Awesome, really cool stuff. Thank you so much, Andrew, for joining us today and sharing your knowledge. For folks that want to learn more about mabl, like you said, we do have the Learn More tab, and you'll also be getting an email after this with a link to this recording, as well as links to other resources to learn more about mabl. Any parting words of wisdom before we go?

Andrew Horgan
You know just one step at a time. Have some fun while you're at it. Thank you all for joining me today. And thank you, Joe, for having me.

Joe Colantonio
Lots of good stuff. I appreciate you. Thank you.