With the growing importance of ensuring excellent user experiences, end-to-end tests are critical to preventing bugs from affecting your users. Jeff Zupka will walk through how to use mabl to build and optimize effective end-to-end tests. Attendees will also learn how to integrate functional testing such as API, email, and PDF testing into their end-to-end tests.

Transcription

Jeff Zupka  

So just quickly about me, I'm an engineering manager here at mabl, and I've been with the team about a year and a half, but I've done software development for many years. A fun fact about me is that I was a mabl user before I joined the team. A few topics that I'm going to be covering today: I'm going to talk about why intelligent end-to-end testing is so important, and especially the impact it has on user experience. I'll also talk about some strategies for building effective end-to-end testing in mabl, and especially how API testing can fit into and level up your testing strategy. 

I want to start with this quote that I came across recently. It was really, I think, the inspiration for a lot of this presentation. It's from a gentleman named Shep Hyken. He's a customer experience expert and speaker, and what I really like about the quote is that it pushes us to broaden what we think of as part of user experience and customer experience. It's every interaction, not just interacting with your web application. Sometimes it's not even the parts that your user can see; it's really end-to-end, it's everything. I think that's a really powerful shift in perspective when you're thinking about how to test those experiences. 

So why does this matter? Well, your user experience really is a major differentiator for software teams. Here are a couple of data points that I think are really interesting. 86 percent of purchasers are willing to pay more for a better experience. Over 80 percent of companies compete on customer experience alone. Over 50 percent of employees are unhappy at work because of the software they use. It's not just that experience is important; your users' expectations of what those experiences should be are so much higher. They're expecting that your app will work well across multiple browsers and devices, that it will incorporate all these different touchpoints that often rely on APIs, that it works seamlessly, and that it incorporates many different forms of content into the experience. Not to mention that a lot of applications, mabl included, are typically built on third-party services and software that really form the foundation of how the application works. 

But testing these experiences can be really hard. You could try to do it using fully browser-based tests, which tend to be easier to create, but that can lead to tests that are much longer, slower, and ultimately less reliable. Also, the reliance on third-party software introduces a lot of new risks in testing. If your test involves interacting with a third-party UI and the user experience there changes unexpectedly, suddenly you have a test that's failing for reasons that have nothing to do with your application, and that just doesn't feel right. You could try a more divided approach, testing parts of your customer and user experience in separate pieces, but that can make your team a lot less efficient and can lead to lower test coverage overall; you might be duplicating your testing efforts instead of broadening them. It can also hurt collaboration between your development team and your quality teams, and that's not great either. 

It doesn't have to be this hard, especially when you're using mabl. So let's go through some strategies for creating effective end-to-end tests. We'll do that by walking through an actual example of a test that the team I'm a part of was working on recently. My team was working on an effort to migrate our authentication provider to a new one. Obviously, authentication is an incredibly critical and important part of the mabl system. It powers our signup and login, access to the app, and access to different resources within the application, so it was incredibly important that we had great coverage for it. The migration was mostly behind the scenes, but it introduced some additional changes into the first-time user experience that we had to account for. 

The experience was broken up into four key parts. There's submitting this trial form here, where you're providing an email and a few other pieces of data. Once you submit that, you open the email that was sent to you and click a link that takes you back into the application to set a password and complete the activation of your account. The very first thing we do once a user has logged in and landed in mabl is have them install the desktop application; you can't create a test in mabl without it. Then finally, we encourage users to create their first test and go through that experience. It's a really key journey for us in our product. 

One of the engineers on my team, Anja, was tasked with making some updates to the tests that cover this experience, and it was a really interesting one because it had a lot of fun touchpoints. There's generating an email, accessing that email, dealing with authentication, which of course was provided by a third party, the mabl web application itself, and also endpoints in our own API that didn't have any UI associated with them. So it's a really interesting case that covers a lot of the things that you can do in mabl. 

The very first thing that I would ask you to think about when you're creating any test in mabl is how to set yourself up for success. In our example, as I mentioned, authentication is an incredibly critical part of the system. So obviously, we want to have very high test coverage, a really great suite of tests running on this flow all the time in different stages, to ensure that we're able to deploy with high quality and a lot of confidence. Additionally, we had a lot of existing tests already covering this flow that we couldn't go and change, because they would break against the current flow while we were working on the new one. 

So how do we deal with that? There are a lot of great ways. At mabl, our deployment process already works like this: you open up a pull request in GitHub with your changes, and we immediately create a preview environment, which is basically just a new version of the app at its own URL that you can interact with and test against. That makes it really easy to test changes as you're working on them. We can configure these environments in mabl and begin to run tests against them, including any new or updated tests, taking advantage of branching in mabl, which is a really great way of isolating these changes. Additionally, labels are really helpful here to segment your tests and categorize them into different functional parts of your application or by team. That makes it easy to both organize your tests and run them in different ways; in our case, running all of our authentication-related tests from the CLI or as a part of our deployment process. 

It's also really important, and a best practice for us, to be able to test these changes very early in the process. API tests are a great way of doing this. On our team, and I'm assuming on a lot of other teams, we tend to build out our API and back-end changes first, sometimes before we even start to work on the UI parts. So rather than waiting until the UI is up and ready to be tested, and holding off on writing tests until that point, a great strategy is to build out some API tests and start testing right away. It's a great way to show that your APIs and changes are working correctly, but it's also a great way to get your development team, your front-end engineers, back-end engineers, and your quality team on the same page right away, with a shared understanding of the scope of these changes. 
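As a rough sketch of what that can look like, the script below exercises a back-end signup endpoint long before any UI exists. The base URL, endpoint, payload, and response fields are hypothetical, and in mabl itself this would more likely be modeled as an API test step rather than a standalone script.

```javascript
// Hypothetical example: exercising a back-end signup endpoint before the UI exists.
// The base URL, endpoint, payload, and response fields are assumptions for illustration.
const BASE_URL = 'https://preview.example.com';

async function checkSignupApi() {
  const response = await fetch(`${BASE_URL}/api/signup`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ email: `e2e.test+${Date.now()}@example.com` }),
  });

  if (!response.ok) {
    throw new Error(`Signup endpoint returned HTTP ${response.status}`);
  }

  const body = await response.json();
  // Assert the response shape the front end will eventually depend on.
  if (body.activationEmailSent !== true) {
    throw new Error('Expected activationEmailSent to be true');
  }
  console.log('Signup API is behaving as expected');
}

checkSignupApi().catch((err) => {
  console.error(err);
  process.exit(1);
});
```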

As Dan discussed yesterday, we have this awesome unified runner that really makes it easy to run mabl anywhere. We take advantage of that all the time, running these tests locally with the desktop app or in our CI/CD pipeline, taking advantage of the different integrations that mabl provides there. It's also really important to strike the right balance when you're creating tests. Going back a few slides, you could use a browser test to fully cover a lot of your end-to-end journeys, but that can make the tests quite long, and oftentimes it might mean testing other applications that aren't yours, third parties, or even other parts of your application that aren't really relevant to the scope of this test. So it's really important to test the UI that actually matters for that test, and there are a couple of great ways to do that. 

Going back to our example of this onboarding experience that we're testing, as I mentioned, there's a part of that experience which is about installing the desktop app. In this case, that experience wasn't even really testable within mabl; it requires downloading our desktop app, installing it on the OS, and loading it up, and you just can't do that within mabl right now. What we were able to do instead is take advantage of the fact that, ultimately, it's just an API call to one of our endpoints that sets a flag recording that the user has installed the application. So really easily, we were able to create a JavaScript step that gets the token that's stored for our UI application, and then a second step with an API request that actually sets that flag. In this case, we didn't have to deal with any of the UX around installing and opening the desktop app at all; we could do it all within our test in mabl with just those types of steps. 
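A minimal sketch of that first JavaScript step, assuming mabl's snippet pattern where the value passed to callback becomes the step's output, and assuming (purely for illustration) that the web app keeps its token in localStorage under a made-up key:

```javascript
// Hypothetical sketch of the token-retrieval JavaScript step.
// Assumes the mabl snippet callback pattern; the localStorage key is made up.
function mablJavaScriptStep(mablInputs, callback) {
  const token = window.localStorage.getItem('app_access_token'); // hypothetical key
  if (!token) {
    throw new Error('No auth token found; is the user logged in?');
  }
  // The value returned here can be saved as a variable and used by the
  // following API request step to set the "desktop app installed" flag.
  callback(token);
}
```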

Additionally, visit URL steps are a really great way to avoid testing parts of your application navigation that, again, aren't really relevant to the purpose of the test you're creating. A common pattern is logging into the application and then navigating somewhere, but the visit URL step is going to be a lot more reliable and just skips over the things that aren't really important. It's also really important to keep your end-to-end testing focused, and that means keeping out things that don't need to be in that test. Again, there are lots of great ways to do this; here are a few that we found and that we use. The fact that you can combine and mix API and browser tests in the same plan is incredibly powerful. It lets you take advantage of shared variables. So you could have an API test that executes certain behavior, produces something like an access token when you're talking about authentication, stores that to a variable, and then other tests leverage that variable. It really allows you to create modular tests that are focused on the things that matter. 
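For example, an early step in the plan might obtain an access token programmatically and expose it as a shared variable for the tests that follow. This is only a sketch under assumptions: the auth endpoint, grant type, credentials, and response fields are all hypothetical.

```javascript
// Hypothetical sketch: fetch an access token so later tests in the plan can
// reuse it via a shared variable. Endpoint, credentials, and fields are made up.
function mablJavaScriptStep(mablInputs, callback) {
  fetch('https://auth.example.com/oauth/token', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      grant_type: 'password',             // assumed grant type
      username: 'e2e-user@example.com',   // real credentials would come from variables
      password: 'not-a-real-password',
    }),
  })
    .then((res) => {
      if (!res.ok) throw new Error(`Auth request failed: HTTP ${res.status}`);
      return res.json();
    })
    .then((body) => callback(body.access_token))          // stored as the step's output variable
    .catch((err) => callback(`ERROR: ${err.message}`));   // surface the failure in the variable
}
```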

Additionally, plan stages can be really helpful here, especially if you have logic to stage some data for your test, create users, create other data, and then clean that up afterward. Plan stages are a really great way of executing that type of logic. Another thing that's really key: you have this test, it's running, and maybe you've added it to a plan that runs regularly or is triggered from a deployment event. That's great. But running these tests at scale can introduce a lot of new challenges, especially if you're testing against a persistent environment. You can imagine you're testing that when you create a user, that user shows up in the list of users in the mabl team management view. But over time, if you continue to add those users without cleaning them up, your test is going to become a lot more complicated, or even unreliable, when it tries to find that user in the list, especially if there's paging involved. So it's really important, when you're starting a test, to ensure that you're starting with a clean slate, in the expected state that you're looking for. 
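As a rough illustration of the kind of cleanup logic a teardown or "janitor" test might run, here is a sketch that assumes hypothetical admin endpoints, a bearer key, and a recognizable test-email prefix; none of these names come from mabl.

```javascript
// Hypothetical cleanup script: remove users created by earlier test runs so
// list and pagination assertions start from a clean slate.
// The endpoints, auth header, and prefix are assumptions for illustration.
const BASE_URL = 'https://preview.example.com';
const API_KEY = process.env.CLEANUP_API_KEY;
const TEST_PREFIX = 'e2e.test+'; // matches the prefix the tests use when signing up

async function cleanUpTestUsers() {
  const res = await fetch(`${BASE_URL}/api/users`, {
    headers: { Authorization: `Bearer ${API_KEY}` },
  });
  if (!res.ok) throw new Error(`Could not list users: HTTP ${res.status}`);

  const users = await res.json();
  const stale = users.filter((u) => u.email.startsWith(TEST_PREFIX));

  for (const user of stale) {
    const del = await fetch(`${BASE_URL}/api/users/${user.id}`, {
      method: 'DELETE',
      headers: { Authorization: `Bearer ${API_KEY}` },
    });
    console.log(`Deleted ${user.email}: HTTP ${del.status}`);
  }
}

cleanUpTestUsers().catch((err) => {
  console.error(err);
  process.exit(1);
});
```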

There are a couple of great ways to do that. As I mentioned, setup and teardown logic is a great way, and this screenshot here is an example of a plan that we use. We have these plans, we call them janitor plans or cleanup plans, that run maybe once a day and execute a bunch of tests which just clean up data from previous test runs that we don't need anymore and that could cause these issues. A really great thing I want to highlight here, again talking about API tests, is one of their huge advantages: just how fast they run. In this case, there are two stages here. The browser tests take a lot longer; they generally do, since they have to load the browser and find elements in it, so they're going to be slower by default. But these API tests, because they're just interacting with an API, can be incredibly fast and generally more reliable. So again, being able to mix API tests and browser tests in the same plan is incredibly powerful. 

Another great use of API tests is making it faster and easier to debug issues that your tests run into. Inevitably, there are going to be failures. But using API tests to isolate and identify where those failures are coming from can be really powerful and make it much easier to track down where an issue is happening. For example, you can have an API test that verifies your environment is up and running and reachable before executing a bunch of browser tests that may ultimately fail; one quick step can tell you that right away. They're also great for third-party interactions. Again, if the service that you're using has API access, using that is generally going to be easier, faster, and more reliable, and it's also a great way to identify whether that third-party service is having issues or introducing regressions into your overall system. 
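For instance, a fast pre-flight check might look something like the sketch below (the environment URL and health endpoint are hypothetical); in mabl, this would typically be a single API test at the start of the plan.

```javascript
// Hypothetical pre-flight check: confirm the environment is reachable before
// spending minutes on browser tests that would fail anyway.
const BASE_URL = 'https://preview.example.com'; // assumed environment URL

async function healthCheck() {
  const res = await fetch(`${BASE_URL}/health`); // hypothetical endpoint
  if (!res.ok) {
    throw new Error(`Environment is not healthy: HTTP ${res.status}`);
  }
  console.log('Environment is up; safe to run browser tests');
}

healthCheck().catch((err) => {
  console.error(err);
  process.exit(1);
});
```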

So going back to our example test here, we created this test that goes through and validates this new user experience, it's running well, and we feel great about it. But how do we really know that we're using it well, that we're doing a good job of using that test to release with confidence, to make sure that we're validating the behavior in our application and ensuring that it's high quality before it goes out in front of customers? That's really where a lot of the coverage capabilities within mabl are super important and valuable. As Dan mentioned yesterday, and probably a few other presenters, the release coverage feature was one my team worked on relatively recently, and it's really great for looking at your overall suite of tests, or maybe a segment of those tests. In our case, we might break it down by using labels to identify tests that deal with authentication, or tests that belong to our team or fall within our team's scope of ownership, and then get a lot of really valuable insights about how those tests are operating and how those parts of the application are doing from a quality perspective. Are we covering all the things that we need to? Are there any gaps in coverage? Do we have any flaky tests that are failing frequently, even if they're passing now? Have we introduced any performance issues into our application with recent changes? This was a really valuable tool to be using during this process where, for a lot of the time, we were incrementally shipping code behind the scenes for our authentication migration and really needed to make sure that we weren't accidentally introducing any regressions into the overall system. 

Zooming out a little bit, it's not just about coverage for the full end-to-end journey. It's also incredibly valuable to have coverage across the different environments and scenarios that your users will be encountering. Going back to the quote earlier, every interaction with your application and with your company is an opportunity to really delight your customers, so the more of those interactions we can cover, the better. There are, again, great ways of doing this in mabl. It still amazes me that in a few minutes you can create a plan, add a bunch of tests to that plan, and then have it run against a bunch of different mobile or web environments, maybe the most common ones that your users might be using. That's really powerful: being able to take those existing tests and very quickly expand the coverage into different environments. Data tables are another great way to do that, and something that we take advantage of, especially with login and authentication. We have some tests that go through a bunch of different login failure cases and verify that they fail the way we expect and show users the correct, helpful message. So again, it's another really powerful tool to let you expand coverage. 
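As a rough illustration of what such a data-driven set of scenarios might look like (the column names and messages are made up), each entry below would correspond to one row of a mabl data table, with one test run per row.

```javascript
// Illustrative login-failure scenarios; in mabl these would live in a data table.
// Column names and expected messages are assumptions, not taken from any real app.
const loginFailureScenarios = [
  { email: 'user@example.com',    password: 'wrong-password', expectedMessage: 'Incorrect email or password.' },
  { email: 'unknown@example.com', password: 'any-password',   expectedMessage: 'Incorrect email or password.' },
  { email: 'not-an-email',        password: 'any-password',   expectedMessage: 'Please enter a valid email address.' },
  { email: '',                    password: '',               expectedMessage: 'Email and password are required.' },
];
```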

Finally, and maybe most importantly, how you work as a team in mabl is really the main driver of whether you're going to have a successful end-to-end testing strategy. Recently, a few engineers on our team have been evaluating how accessible and approachable our configuration and setup in mabl was for new hires, new engineers, new quality engineers, and the rest of the team. It definitely turned up a lot of great patterns that we either were using or have started to use. Again, things like labels, labeling by feature group, functional group, or team, let you not only organize your tests really well, but also get insights in very focused ways for different segments. Echo steps are another great one; they really let you create test documentation, and in a way they let you break up the test and give someone who may be seeing it for the first time a better understanding of what it's trying to do. One thing that I forget all the time is renaming test steps: when you have the trainer open, you can double-click on a step and add a more descriptive step description. It's incredibly easy and another great tool. And then there are custom email prefixes. 

If you're leveraging mabl email testing, mabl mail, you can assign a custom prefix that will let you differentiate emails within your tests and make it easier to understand what's happening, especially if you're working with multiple emails. Then, hopefully this is obvious, but the ability to reuse logic across different tests and plans is incredibly important and a really key feature of mabl. Again, it's something we take advantage of, using reusable flows to capture shared logic. 

For this example, as I've been walking you through, we noticed that the logic to do signup within mabl was duplicated in a few places. So it's really simple to just pull that out into a flow and then use that flow in all the tests that need it. The same goes for snippets: having modular, reusable pieces of JavaScript that you need in your tests. With both of these, parameterization is super important. Parameterized flows make it easier to reuse them across different environments when you need to pass different variables into those flows, and soon parameterized JavaScript steps will be a really incredible extension of that feature.
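As a sketch of a parameterized snippet, assuming mabl's snippet pattern and that previously defined variables are exposed on mablInputs (the variable names here are hypothetical; check the snippet editor's template for the exact shape):

```javascript
// Hypothetical parameterized snippet: build a unique signup email from variables
// set earlier in the test, so the same snippet works across environments.
function mablJavaScriptStep(mablInputs, callback) {
  // Assumed shape: user-defined variables under mablInputs.variables.user.
  const vars = (mablInputs.variables && mablInputs.variables.user) || {};
  const environment = vars.environment_name || 'preview'; // hypothetical variable name
  const emailPrefix = vars.email_prefix || 'e2e.test';    // hypothetical variable name

  const email = `${emailPrefix}+${environment}-${Date.now()}@example.com`;
  callback(email); // stored as a variable for the signup form step to use
}
```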

So, finally, just wrapping things up here, here are a couple of key points that I hope you take away from this. The first is just how important end-to-end tests are to delivering great experiences, experiences that your users will love. Second, an effective automated end-to-end testing strategy has to be a few things: it has to be balanced, from both a test-focus and a performance standpoint, but also diverse, taking advantage of all the rich toolset that mabl provides and really combining API tests and browser tests. To really ensure holistic end-to-end coverage, your testing needs to be balanced and diverse. Then finally, going back to working in mabl as a team, real success depends on the effort that you put into using mabl with intention, planning, preparation, and teamwork. All the effort you put into it won't matter unless you're being intentional about how you're using mabl, how you're working as a team, and making sure that you're setting your whole team up for success. So that is it, and thank you again for listening today. I'd be happy to take any questions in the few minutes that we have left.

Kate Peterson  

Thank you, Jeff. We do have a lot of questions in the chat. I will say, I'm sure we won't get to every one of them, but hopefully we'll answer some, and if not, please feel free to reach out to Jeff or anybody on the mabl team with your specific questions that we don't get to. So one question that we have right now is: at what point in the process should I be running end-to-end tests?

Jeff Zupka  

Yeah, that's a really great question. Ideally, you're running them early and often, as I think I hit on earlier in the presentation. That's, again, one of the things that we've tried to provide in mabl: the ability to run those tests very early in the process, even during development, as you're building out the code and the user experience. So you're using local runs and the desktop app, you're using mabl in your CI/CD pipeline, things like that. The sooner that you can start testing those end-to-end experiences, the more confidence you're going to have that you're really delivering a great, high-quality experience.

Kate Peterson  

I think we have time for one or two more, so another one is: how should I decide what parts of a journey to include or exclude in my end-to-end test?

Jeff Zupka  

Yeah, that is the million-dollar question. That's a really good question. The short answer is, it depends, and it will likely be different for every team. I would encourage you to think about the journey, the experience, from the starting point to the end point where your user is really getting some value out of your application, and let that guide where to draw those lines. From there, it's really about balancing a lot of different tradeoffs: how fast or slow your test runs, and the longer it is, the more likely it might be to fail or be less reliable. Of course, try to pull out anything that feels unrelated or that could be shared across tests. So, again, use API tests for interacting with third parties, or pass in shared variables when that makes sense. Taking advantage of a lot of those features to make your tests more modular will let you have better, more performant, more reliable tests within mabl.

Kate Peterson  

Great, and one final question, Jeff: does mabl allow you to extend end-to-end tests to email?

Jeff Zupka  

It does, yes. That was incredibly valuable for this test of our new user experience, and it's also really easy to take advantage of the mabl mailbox feature. Really, it's just creating a variable that generates a new email address, using that in the signup form, and then adding another step to access that inbox, look at that email (in this case, an activation email), assert that it's correct, access the body of that email, and click the link. It's a really important tool in our toolbox, as email is a really important part of a lot of user experiences.