Increasingly, software teams have to deliver faster while maintaining quality. To help teams build quality software, our number one job has always been to help them easily create, execute, and maintain their tests. In this session, we'll highlight mabl's recent releases that help your team create better user experiences, as well as what's to come.

Transcription

Thomas Lavin  

Our presentation is what's new and next in mabl, so let's get started. Again, I'm so happy to be here with all of you for our second year of mabl Experience. I'm Thomas, and I've been part of mabl since closed beta back in January 2018. I'm the product manager for the framework and workflow teams. I actually started my journey in manual QA, then moved to customer success, and I'm now a product manager here at mabl. I'll pass it off to my colleague Juliette.

Juliette MacPhail  

Hey all, I'm Juliette MacPhail, and I've been with mabl for about two and a half years now. I'm currently the product manager for our browser and API software testing teams, which focus on the core of the product around test creation and execution for both browser and API software tests. I've most recently been working on the mabl desktop app and our unified runner, which I can't wait to tell you more about.

So there are a few different topics that we're looking to cover over the course of this session. To set the stage, we'll take a look at the role of quality engineering and industry trends, as well as the benefits of low-code and intelligent automation in this context. We'll also talk about how mabl is approaching the space, our strategy for accelerating quality engineering, and the recent releases in the product that contribute toward those initiatives. We'll also leave some time at the end for any questions about our work, and we're really looking forward to hearing from you all.

Before we delve into the role of quality engineering, I want to take some time to talk about how we're thinking about the space. The plan is laid out over the course of 2020 and 2021, showing what we're moving toward as we think about development. The core here is really about being the easiest tool for expanding test coverage. Before we can think about other dimensions of quality or collaboration, we need to ensure that the process of creating tests and expanding coverage across APIs, UIs, and emulated browsers is as seamless as possible. That's always going to be at the core of what mabl does.

Outside of this core, we're starting to think about how we can build on top of that test coverage. Once your team has sufficient testing across your application, how do we ensure that we're running those tests at the right time, in the right places, and frequently enough to provide rapid feedback? That's when we start delving into the ways you can integrate mabl more deeply into your core workflows. mabl works with the tools that are already a part of your process and enables you to collaborate with your team during all stages of the development lifecycle, whether that's with Jira, GitHub, Microsoft Teams, and more.

Beyond these workflows, we're also thinking about additional dimensions of quality. So you have test coverage to ensure that your application is functioning as expected, and you're able to run those tests across your pipeline and collaborate with your team to make changes. But in addition to QA, there are many aspects that are key to building a great user experience, including accessibility, performance, security, and more. We're moving beyond binary pass/fail results into a world where you have a variety of insightful information guiding your testing and helping your team make more intelligent decisions and investments. You'll only see this area expand as we get further into 2021 and beyond.

First and foremost, it's an exciting time to be someone who's focused on quality engineering, because there are so many key trends in the industry. We are realizing that quality engineering plays a critical role in enabling innovation. Whether you're broadening your adoption of agile, moving to DevOps, continuous development, and continuous integration, or your team wants to migrate to the Cloud or shift left, quality engineering is ultimately an enabler for all of these critical trends. So today, we're going to double-click on low-code and why we believe it's so important for our needs and our users today.

Thomas Lavin  

Why are testing and low-code so important? There are several factors at play. The reason we're really here is that a lot of the approaches we've taken historically have had significant drawbacks, especially high-code testing relative to low-code testing. A lot of these automation solutions, like open-source, script-based frameworks, have just been too hard. You have to be a developer who has learned a specialized automation framework, but also a good tester, which is a totally different skill set: being able to actually create the tests with the proper assertions, the right logic, and so forth. So there have been very few people in the world today who actually have that superhero-level skill set and experience.

Additionally, the market is just so competitive for finding the right people, especially with such a high level of competency and experience. On the other hand, sometimes it's just too easy. A developer, just as an example, using Cypress or another tool can create some scripts and tests. But as we've seen in other areas of technology, whether it's Cloud computing, virtualization, or otherwise, sometimes it just ends up being too easy to do the basic thing, and what you end up with at the end of the day is sprawl. You end up with a lot of teams that have created thousands of tests but still have bugs in production; they aren't moving their metrics forward, because while it was easy to create the test automation framework, they didn't have an effective testing strategy. That's really challenging because it can lead to extensive maintenance. Oftentimes you create these tests far too easily, but many of them fail every time you make a small change in your application. So sometimes your developers and your team end up spending more time fixing the tests than they did creating them, or, worst-case scenario, even more time than they spent writing the code being tested in the first place.

Really, the recent tragedy of test automation is that so many teams have spent engineering years building out these complex test automation frameworks, running them in the Cloud with some infrastructure-as-a-service provider, and investing millions of dollars in all of this. Then they have to throw out the entire test suite because it just becomes too complex and too unwieldy to manage. On top of that, they also have very few people who can actually participate in the ownership of quality and run that quality process effectively.

So mabl tries to overcome that, and it starts with the assertion that if you really want to automate a process, you have to build intelligence into it if you want to avoid those traps we just talked through. We know this intuitively in places outside of test automation. Self-driving cars are a great example: you wouldn't just say, I'm going to build the engine, give it a set of instructions, and have it go drive, which is effectively what we've done with test automation.

You recognize intuitively that there's a lot more to driving than that. You have to have a lot of sensors in the car and a lot of data, you also need GPS, and you need to be able to read what's happening in real time when you're out on the road, because there's a whole other set of rules that the car, and you as the driver, have to follow. If you're only providing those basic levels of input, you're certainly not going to be safe, and you're not going to drive effectively either. You have to put machine learning and other models in place to make the intelligent decisions that all of us as drivers go through training and certification for to get a license. Once you can do all of that, you plug that brain into the control plane, the thing that actually automates the driving. Then you really have some potential.

Effectively, that's where we're going now with test automation. That's why we're saying, look, it's not just about the drivers themselves that can move a browser, drive a mobile app, or interact with an API. Once you understand the intent of the automation, you have to collect the data and analyze that data first, and then make good decisions, for that test automation framework to ultimately be effective.

Juliette MacPhail  

So low-code is really a key tenet of quality engineering. If we don't focus on low-code, then we're likely to limit the number of people, and the roles, who can actually participate in quality on our teams. When we talk about intent, we're referring to testing the things that you would look for if you were testing manually. We can separate that out from the automation, where the intent is actually manifested in the test itself and automation then drives the execution of it. So we let mabl handle that part and let teams focus only on intent and the functionality that they want to test.

We present them with a low-code interface and let the system handle the implementation. What that actually means is that not only developers, but manual testers, product owners, support people, and others can all participate in quality, and we don't end up in silos.

Another aspect of this is that once you separate the intent from the implementation, we can build a system that can be very intelligent. As an example, say the intent for a given test is to submit a form, and there's a submit button on that form. Perhaps my development team is making changes and ends up modifying the ID of that button. With more traditional test automation solutions, that test is going to fail because it relied on the ID to locate the button. But in this new era, since we're collecting all of this information as we're running the tests, the system knows that even though the ID changed, the button is still there, and we can locate it using numerous different attributes and techniques. So the system will attempt to locate the button and proceed with the test.

When we can actually identify that an element has changed, we can update the test automatically based on the information we've learned, and that's what we call auto-healing. We're able to accomplish this by separating intent from implementation, letting the system handle that implementation, and enabling people to express intent with as little code as possible.
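To make the submit-button example concrete, here is a minimal JavaScript sketch of the general idea of multi-attribute element location. The attribute list, field names, and fallback order are illustrative assumptions, not mabl's actual implementation, which draws on many more signals and models:

```javascript
// Illustrative sketch only: try several recorded attributes in order,
// rather than relying on a single brittle selector.
function findElement(recorded) {
  const strategies = [
    () => document.getElementById(recorded.id),
    () => document.querySelector(`[name="${recorded.name}"]`),
    () => [...document.querySelectorAll(recorded.tag)]
      .find(el => el.textContent.trim() === recorded.text),
    () => document.querySelector(recorded.cssPath),
  ];
  for (const locate of strategies) {
    const el = locate(); // first attribute that still matches wins
    if (el) return el;
  }
  return null; // genuine failure: the element really is gone
}

// The button's ID changed, but its tag and text did not, so the
// text-based strategy still finds it and the test can proceed.
const button = findElement({
  id: 'old-submit-id',
  name: 'submit',
  tag: 'button',
  text: 'Submit',
  cssPath: 'form > button.submit',
});
```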

With all of that in mind, I want to provide some additional context on how mabl is thinking about the space and how we can be a partner in your quality engineering efforts. Over the course of 2021, our top priority has been what we refer to as job one. You can think of job one as really the core testing: how you expand test coverage across use cases, browsers, devices, and more with minimal effort.

I often talk about the importance of ease of use and mabl's role in leveraging intelligence to reduce the effort needed to create resilient tests and rapidly expand test coverage. We talked earlier about how quality engineering plays a key role in industry trends. So whether your team is looking to become more agile or shift left, we want to help you achieve those goals by embedding quality engineering into your environments, your teams, and your workflows. From new integrations to reporting capabilities, we're focused on investments that will allow you to scale and further integrate your testing.

The last aspect here is making the transition from quality assurance to quality engineering. Outside of functional or end-to-end testing, there are so many other key dimensions that are essential for building a world-class user experience. We're looking to help you make this jump to a new standard of quality.

Let's get into some of the more specific investments that mabl is actually making related to job one. As you all may know, we released the mabl desktop app this past May. This has really been a key initiative for providing a unified platform for creating tests in mabl. The desktop app enables you to accelerate the test creation process through stateless training sessions. What that means is that each new test begins in a completely clean browser that exists separately from other browsers on your machine, so you don't have to worry about cookies, browser state, or other tabs interfering with your test. It also allows you to get rapid feedback on your tests by kicking off local runs directly from the trainer. This enables you to confirm that your tests are working as expected before you move those runs to the Cloud, and it also allows you to quickly test changes as you're updating your tests.

In addition to improving core aspects of the test creation process and enabling local runs directly from the app, the desktop app also allowed us to expand into additional types of testing. I'll talk about this a little bit more in a moment, but the desktop app also provides our full suite of API software testing features, allows you to train mobile web tests, and will eventually allow us to expand into other verticals such as native mobile.

As part of this effort, we're also moving test creation fully over to the desktop app at the end of 2021, and we'll be sunsetting our legacy Chrome extension at that time. API software testing also became generally available this year alongside our desktop app. If you've used API steps in mabl before, you might be familiar with some of those core features, but API software testing really takes us to a new level by taking those steps outside of the browser context. With API testing, we've heard from our customers again and again that they want to empower the people on their team to quickly and easily create API tests without the need for extensive coding experience or too much manual work.

You can easily create these API software tests directly in mabl, or you can use our bi-directional integration with Postman to import any existing tests that your team might have. The benefit of having this unified platform is that you get the core functionality that mabl provides, which includes combining API and browser tests in the same plan, sharing variables between them, and having a centralized place for reporting across your test cases.

Through combining API and UI tests in the same plan, we've seen a number of customers who actually use API tests to set up the test data for their browser tests, and then use API tests to tear down that test data afterward. Because these tests are performed at the message layer, they execute incredibly quickly and allow you to really target your UI testing efforts. We've also continued to release incremental improvements and enhancements to API software testing, which will continue into the future.
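As a rough illustration of that setup/teardown pattern, here is a hedged sketch of what API-driven setup and teardown around a UI test could look like conceptually. The endpoint, payload, and functions are hypothetical placeholders; in mabl itself you would express this as API test steps in a plan, sharing values through plan variables rather than raw code:

```javascript
// Conceptual sketch of API-driven setup/teardown around a browser test.
// https://app.example.com/api and the /users resource are hypothetical.
const API = 'https://app.example.com/api';

// Setup: create test data at the message layer (fast, no browser needed)
// and capture the new record's ID for the UI test to reuse,
// much like a shared plan variable.
async function createTestUser() {
  const res = await fetch(`${API}/users`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ name: 'Test User', role: 'trial' }),
  });
  const user = await res.json();
  return user.id;
}

// Teardown: remove the data the UI test exercised.
async function deleteTestUser(id) {
  await fetch(`${API}/users/${id}`, { method: 'DELETE' });
}
```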

Thomas Lavin  

This next item is coming soon to the mabl app, and it's very widely requested, so we're excited to share it with all of you ahead of the actual release: parameterized JavaScript snippets. What this really comes down to is greater reusability and general ease of use, especially targeted at your manual testers, people without technical experience, or people who just may not know JavaScript. Going back to our earlier point on low-code, the inspiration was really to help separate the intent, which might be a JavaScript snippet used to format something and add it to a specific URL that you're going to visit, from the implementation, which might be generating a variable, creating a different value, and calling back. Someone who doesn't have the experience, or doesn't feel comfortable using JavaScript, doesn't have to interact with any of those specifics.
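For context, a mabl JavaScript snippet is a small function that receives the test's variables and returns a value through a callback. Here is a simple sketch of the kind of snippet that parameterization targets; the exact variable wiring shown is an assumption based on the current snippet format and may differ from the released feature:

```javascript
// Sketch of a reusable snippet: build a URL from input values instead of
// hard-coding it. With parameterized snippets, a non-technical teammate
// could supply `path` as a parameter without reading or editing this code.
// (Variable access shown reflects the current snippet format; it may differ.)
function mablJavaScriptStep(mablInputs, callback) {
  const baseUrl = mablInputs.variables.user.baseUrl; // set earlier in the test
  const path = mablInputs.variables.user.path;       // the would-be parameter
  callback(`${baseUrl}/${encodeURIComponent(path)}`); // value returned to the test
}
```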

Alongside that, we've also added a number of improvements to help you better test changes to new and existing JavaScript snippets without having to jump into the actual JavaScript, as well as support for more easily adding identifying information like descriptions. We've also added better support for larger responses that you may be feeding into API steps. Perhaps you've generated a large JSON response in a previous API test and saved it into a variable; you pass it along to the next stage of your plan, and now you actually want to use it, or change it somehow, and send it off again. You're able to pull out that full JSON, as opposed to just using a smaller editor to view it.

Juliette MacPhail  

I'm very excited about this one; my team has been working incredibly hard on it, so I'm happy to share it with you all today. The unified runner serves as the next generation of the mabl testing service. You'll often hear us refer to it as unified because what's actually happening here is we're bringing the same framework we use in the desktop app and our command line interface to the Cloud. What that means is we're able to provide a more consistent test execution experience across all these different methods of execution. It also provides some considerable speed improvements, around 42 percent faster than our legacy runner. So it's been really exciting for me personally to see our customers try out the unified runner and benefit from these enhancements. Especially if you're running tests as part of your CI/CD pipeline, it's a really great way to shorten that feedback cycle and ensure you're getting the most consistent execution experience.

In addition to making the unified runner on Chrome generally available in the coming months, we're also looking to expand to additional browsers, so that includes Edge, Firefox, and Safari. I will also note that all runs on the unified runner are free during the beta period. So if you've not had the chance to try it out, I would highly recommend it. 

We've also talked quite a bit about the importance of expanding test coverage across devices and browsers, and as part of that effort we released our mobile web beta this year. The desktop app now allows you to train tests against emulated mobile devices on Chrome and execute those tests across a wide range of device profiles. Testing on mobile devices is becoming more critical than ever.

In 2020, over 60 percent of US website visits originated on mobile devices. Historically, testing sites on mobile has not been an easy task. We've really seen the value of easy test creation and execution for mobile across our customer base as well, with some customers running about 40 percent of their mabl tests on mobile devices. This allows your team to validate the user experience across responsive applications, delivering a seamless experience for your users regardless of what device they may be using.

Thomas Lavin  

Next, let's take a look at how mabl has been supporting your core enterprise workflows. This one I'm also particularly excited about: it's our native two-way integration with Jira Cloud, which really is best in class. I'm not just saying that because I helped build it and I'm biased; it's what you've been telling us and what we've been hearing from all of you who have been using it, to create thousands of Jira tickets at this point.

It's really about making sure that we can identify issues and triage them as quickly as possible. You can see here there's a failed test and a button to create an issue in Jira. When you actually create that issue, or bug report basically, you can see that it automatically populates all the info the developer needs to understand and really triage the issue: a screenshot of the status at the end of the test, plus a HAR file, network logs, and more. With all of those links at the bottom, the developer can use the CLI to kick off runs to verify locally and make sure their changes are working when they're creating a build and when they're actually going to merge that into production. They can send those results to the Jira ticket, showing proof that it's passing, along with all the diagnostic info showing that it's working. Ultimately we're trying to eliminate surprises there. That's why you can run the same test locally in the CLI to verify that it's going to work later when it hits production. This is something we really believe is critical to being able to react quickly and deliver great customer experiences for our users.

This is another thing we're building out as a core foundation of mabl's reporting in the future, and it's release coverage. The goal here, at least when we initially designed it, was to create a single source of truth for our own internal testing on our workflow team: we wanted to make sure that we were creating new tests with new feature development, actually running the tests that we created, and having them all pass before we made changes. The ultimate result is this, which gives you a way to track the progress of your testing and view a lot of the data that mabl already had but wasn't surfacing.

You can also set targets here to make sure you're sufficiently running all of your tests, or say 80 percent of your tests, against a given release. You can also target specific feature areas with test labels to make sure you're looking at the right ones.

Another thing that we've heard from all of you that's really critical is bringing alerting and insights to where your team is working, whether or not they have direct access to mabl. Some things ultimately just can't wait for an email. That's why we have robust integrations with Slack and, now, Microsoft Teams. This is one of the newest releases that is live in mabl. As soon as a failure happens in your workspace, or whatever else you've configured the integration to send on, you'll get these messages right away in Slack, or in Microsoft Teams as in this example. If it's a failure, you'll get a link to view the output as well as some relevant info directly within the message. There's no limit on the number of integrations, so feel free to add as many as you'd like, or combine them to send specific messages to specific channels.

Juliette MacPhail  

mabl wants to be a partner in your quality engineering efforts, and in order to be a good partner, we need to ensure that we're maintaining high security standards, mitigating cyber risk, and protecting your data. As part of this effort, we recently achieved SOC 2 compliance. What that means is that we're auditing and continuously tracking how we manage data security, availability, processing integrity, confidentiality, and privacy. Moving forward, we have an ongoing commitment to sound data practices and data management, and to ensuring we're doing our part to protect our customer community.

Thomas Lavin  

Lastly, we'll touch on some of the ways that mabl is enabling quality engineering directly, and will continue to in the future. As mentioned earlier, we really want to weave intelligence into your testing, and a core part of how we approach that now is letting mabl do a lot of the hard work of aggregating data and delivering it to you in an insightful way. That's not to say you can't access the nitty-gritty details; you can. But it takes tremendous effort in traditional high-code tools to pull all of this data together. This performance trending data really is the first step here, allowing us to roll up all the data that mabl has already collected over the past months and show it to you without any setup.

We give you the tools to slice the data how you'd like, although there's a lot more coming in this regard. You can track it all the way from the workspace level down to a specific test or test run, as Dan showed you in the keynote yesterday. You can expect a lot more from us here soon, particularly around some exciting areas like accessibility. Again, the key is that there's no implementation needed; you don't need to build any of this yourself. The data that populates this is actually the same data we saw on the Jira ticket earlier, the details that developers can use to actually triage and solve an issue.

Just to wrap up, I wanted to say a big thank you to all of you for joining us. For a final review: we talked about how quality engineering plays a critical role in enabling great user experiences, which at the end of the day is who we're here for; we're here for our users. We also talked about how low-code is central to an effective, realistic, and scalable testing strategy that democratizes quality across your team. We talked about mabl's job one, with continued investments across APIs and UIs, making that testing easier than ever, and now even emulated mobile testing. We also talked about integrating deeply into your core issue-tracking workflows with Jira, communication with Microsoft Teams, and delivery workflows. Lastly, we touched on bringing intelligence to your testing without extra effort, to give you the tools to make the jump to quality engineering, in performance and soon, perhaps, accessibility. So thank you so much. Do we have any questions here while you have us?

Katie Staveley  

Thank you, Thomas and Juliette. We did have a couple of questions come through, so I'll start with the first one here. Can we use a test trained on desktop for mobile web app testing?

Thomas Lavin  

I can answer that one. When you create the test for the first time with the desktop app, when you launch that test and select the specific device you'd like to use, mabl will emulate that device in Chrome, and you can train that test for responsive web. We don't have native mobile testing today. But when you run that test in a plan, for example, it will retain those settings, so long as you select the emulated option, and it will run the exact same way that you trained it. That's one of the big advantages of the desktop app: you don't have to dig into dev tools and change the actual state of your browser, the main one that you're using for all your other tasks.

Katie Staveley  

Great, the next question we have here is: if the unified runner is bridging the gap between Cloud and local, what will be the differences?

Juliette MacPhail  

That's a really great question. The key thing to note here is that we're using the same execution logic across both of those, so your test should run exactly the same way between the unified runner in the Cloud and your local execution. The key difference there is test artifacts. You'll notice that when you run a test locally, it'll execute even faster than in the Cloud. That's because when you're testing in the Cloud, we're collecting DOM snapshots, HAR files, and all of that really valuable information for diagnosing any potential failures or understanding what's happening in your application. So you should see the same execution across both, but if you're looking for really fast feedback to make sure your test is working as expected, the local runner is a great option for that.

Katie Staveley  

Terrific. Is the performance data exportable?

Thomas Lavin  

We have something coming soon, and I believe it may be in the app already, that will allow you to pull that data from the results page. That will be cumulative: we have the cumulative speed index, the total time that your application took to load as your user would experience it. You can get that today and export those results to CSV. There are a lot of controls there, and you can pull results over a long period of time if you'd like. We're also exploring ways to get more of that data out, so you should see it in more areas of the mabl application. That's the speed index value that's populating all of the performance charts that we showed earlier.