Cloud, DevOps, and customer expectations are transforming software development faster than ever before. Quality engineering is helping teams navigate those changes, deliver new products faster without sacrificing quality, and harness data to improve the entire development process. In this keynote, explore how teams are adopting quality engineering, and learn how to expand QE in your own organization.

Transcription

Dan Belcher  

Hello, everyone, and welcome to Experience. I'm Dan Belcher. A little background on me: I'm one of the founders here at mabl. I've spent my entire career working on enterprise software, and I'm particularly passionate about building software teams and building tools for software teams. I'm very excited to be here with you today. In this keynote, we'll cover five broad areas. The first we'll discuss is this moment that we're having in quality engineering. Then we'll talk about mabl's particular focus on low-code software test automation. Next, we'll talk about our aspirations for quality engineering and the role that innovation plays in that. We'll talk about some recent enhancements to mabl, and then some that are coming up. Finally, we'll focus on the outcomes: the benefits that we're working with our customers and partners to deliver to our businesses, to our teams, and to our customers. 

But first, let me welcome you to Experience 2.0, the second version, the second release of our annual user conference. We are so excited to be back here with so many customers, prospective customers, partners, and members of the quality engineering community. I want to thank you all, and I also want to thank the mabl team that's worked so hard to put this event together. Just so you know, you're joined by 700-plus fellow attendees. We have 13 sessions lined up for you over the next two days, and over 20 speakers at last count, so it should be a great couple of days to talk about quality engineering. It's also a very diverse group. About two-thirds of you are quality assurance, testing, and quality engineering professionals; about 14 percent are developers; and 21 percent are in other roles, including product owners, product managers, support analysts, and more. About two-thirds of you are individual contributors, and about a third are managers, executives, and so forth. So it's a well-balanced, representative audience today. 

We are also so excited to welcome so many of our customers and partners to present at this Experience event. Atlassian will join us in just a few minutes, and then we have customers presenting across many industries: financial services, eCommerce, technology, and more. Attendance at this event reflects a distribution very similar to our customer base. The mabl customer base is a little less than 50 percent in the United States; we have a large and growing contingent in Japan, a solid base of users in Europe, many users in India, and a small but growing population of users in South America, Africa, Australia, and New Zealand. It's certainly a global movement. 

The mabl team is becoming more distributed to reflect that. While we are a US-based company with many employees in the United States, that group is increasingly distributed across the country. We've, for a long time, had employees in Europe and in India. Then this year, we added a team in Japan and another team in Argentina. So you should expect to see mabl become more distributed in the coming months and years. I also need to put in a plug: mabl, like so many of you, is hiring. We're hiring across all functions, including sales, marketing, engineering, support, and quality engineering, so please don't hesitate to reach out to jobs@mabl.com if you're interested in learning more. 

Now, let's talk about this moment that quality engineering is having in the market and in the industry more broadly. Our view is that there's increasing recognition, especially at the executive level, of the importance of quality engineering in enabling these incredible transformations that we're looking to deliver for our businesses. There's accelerated digital transformation: taking brick-and-mortar experiences and delivering them digitally over the web, to mobile, or otherwise. There's DevOps transformation: trying to accelerate time to market by applying DevOps principles and software test automation. And finally, there's technology transformation, whether that's moving from private cloud to public cloud, rolling out microservices, or enhancing our user experience by rolling out single-page applications and more. 

I think one trend that I've seen very broadly over the last 12-18 months has been the increasing recognition from CTOs and heads of engineering that none of these transformations happen without significant investments in quality, and that quality engineering in many ways is not only an enabler but also defines success for these transformations. We need to define what high-quality experiences look like to our users and ensure that we're delivering high-quality services and experiences before, during, and after any transformation. The data really backs up the idea that quality engineering is a key enabler. For example, when we think about software test automation, this year's State of DevOps report highlighted that the vast majority of teams with a high degree of testing and deployment automation are confident in their ability to make changes without impacting their users, whereas teams with a low degree of testing and deployment automation are very unlikely to believe they can make changes with confidence. 

Our own research this year really magnified this trend. We surveyed over 600 software professionals, touching on many, many areas, and this chart really stood out to me among all the others. What it shows is that, relatively speaking, achieving excellent test coverage has an incredible impact on your team's ability to roll out fixes. The thing that I would draw your attention to is the difference between really good test coverage and excellent test coverage. Sixty percent of respondents who said they have excellent test coverage believe that they can deploy changes, for example bug fixes, within eight hours of discovering them, whereas only 37 percent of people who had really good test coverage felt the same way. When you think about it, this highlights the ability of people who have excellent test coverage to make changes and rely on the testing that they have in place to unearth any issues. If you can't fully rely on that, then you end up wanting to do things like letting changes bake in a staging environment overnight, or doing manual testing in addition to the automated software testing. So many of our users are really in pursuit of this excellent test coverage, because that's where we see the dramatic payoff in terms of being able to move quickly. 

Another dimension of this is customer satisfaction. From the same report and research, the difference in customer satisfaction between teams that believe they have low test coverage and teams that have high test coverage is staggering. Eighty percent of the teams that report having high test coverage also report having high customer satisfaction, whereas only 30 percent of the teams that have low test coverage report having high customer satisfaction. It's just as striking that only three percent of the teams that have high test coverage believe that they have low customer satisfaction. That contrast speaks for itself. 

So the data proves out that investments in test coverage and quality engineering pay dividends, and at mabl, since the beginning, we've always viewed our first job as making software test automation as easy and accessible as possible. We want to empower anyone on the software team to contribute to quality by delivering a low-code software test automation solution. This year, we made really good progress on making it even easier for people to create and maintain effective end-to-end test coverage. 

Last year at Experience, we previewed a new interface for mabl that we call the unified desktop application. That desktop application is the primary way today for users to create, maintain, and manage their tests in mabl; it replaced a Chrome extension called the mabl Trainer. Using the desktop application, you're able to create tests that are much more reliable in terms of their results on the first attempt, because the desktop application comes bundled with a local runner that uses the same technology we use in the cloud. So you have consistent test results, whether you're running your test locally or in the cloud. The desktop application also unlocks new capabilities for us, including mobile web testing and end-to-end API testing. We'll talk more about those in a minute. 

The desktop application has seen significant adoption across our user base: we're approaching 80 percent of all test edits happening in the desktop application rather than the legacy extension, and we'll see that number go all the way up to 100 percent in the coming months as we retire the Chrome extension. This year, we also introduced a unified test runner. This runner uses exactly the same technology whether you're executing tests locally, in your continuous integration or build environment, or in the cloud. The unified test runner is available today in beta for Chrome, and we'll move it to general availability before the end of the year. One of the most significant benefits of the new unified runner is that it's much faster than the existing runner. What you're looking at in this chart is a plot of the speed of the v2 runner in comparison to the v1 runner. Anything to the left of the y-axis means the test runs faster by that percentage; anything to the right means it runs slower. As you can see, out of 100 or more tests, only one runs a little bit slower on the unified runner, whereas the vast majority are 40 to 50 percent faster, or more. So we're very excited to bring this unified runner to general availability for Chrome this year. We can see, day by day, the adoption of that unified runner accelerating, and it's available to you free of charge as part of your mabl subscription today in beta. So these investments in making it easier for you to create and maintain reliable end-to-end tests are paying dividends, and nothing inspires us more than hearing customers talk about the tangible benefits that this low-code testing approach is delivering for their teams. 

We'll hear today from Sensormatic talking about an 80 percent reduction in effort to create tests, and how they were able to improve test coverage from 30 percent to 96 percent. We hear customers talking about test creation happening three times faster than with legacy script-based solutions. We also hear about regression cycles being six times faster using mabl than with existing solutions, and about teams reducing the regression testing planned as part of each sprint by 95 percent. These measurable benefits are so exciting to see. 

Anecdotally, I just received this quote from Barracuda, one of our great customers, and you'll hear from Andrea as part of the event as well. One of the senior managers at Barracuda passed along this quote, and I thought it captured our mission, and job one, very well. That team moved from a world where significant ongoing effort was necessary to maintain a bespoke, homegrown software test automation framework. Moving to mabl's low-code approach dramatically reduced the resources required to maintain that framework and freed up that headcount to focus on adding test coverage. But just as importantly, mabl and the low-code approach empower the entire team to participate in quality, and more team members are now automating tests. This was great to see, and again, I think it represents very well what we're hoping to achieve with this low-code approach: making testing as easy as possible. That's really the first step for us. That's why we call it job one: let's make testing as easy as possible, so that we can enable teams to shift their focus from quality assurance to quality engineering. 

What do we mean by quality engineering? There are really four principles that we talk about. The first is: how do we build quality throughout the software development lifecycle, at every stage? Second: how do we think about quality across the entire customer experience? Third: let's make sure that we're expanding our thinking beyond purely functional correctness to include all of the aspects of quality from a user's perspective. Finally: let's find a way to use data to continuously improve our process and outcomes. We'll talk about each of these in turn. 

The first is: how do we integrate quality across that software development lifecycle? We've invested a lot over the last couple of years in enabling developers and testers to begin doing end-to-end testing as early in the SDLC as possible, and for many people it really starts when they have a developer branch on a local machine or in a local build. Using the desktop app, you can trigger any test to run against your local build. So this is a great way of validating that local build before you even put the branch up for review. 

But we know many developers like to work from the command line, so we also released a command line interface that allows you to create new tests from the command line and also to trigger tests from it. Importantly, we make it, again, as easy as possible to do that. As an example, say I had a test that ran in the cloud and failed because there was a bug, and now, as a developer, I've attempted to resolve the bug locally. I can run a command from the mabl command line interface that will execute the test exactly as it did when it failed, using exactly the same configuration and parameters, so I know for sure that I've resolved the issue. Now we go from local builds to, oftentimes, the pull request stage. This is a critical stage because, for many teams, it's the last step before changes are merged into a shared codebase, such as the main branch. Many teams have lacked the ability to do end-to-end testing at this stage, which is problematic, because if you don't know what the test results are going to be until the change gets merged, oftentimes it's really too late and you have to stop your pipeline. So we've added the ability, first, to run mabl tests inside your continuous integration environment using what we call our CI runner. Internally at mabl, for example, we do that as part of our build stage in CI. Second, if you have preview environments that get deployed when you put a pull request up, you can use mabl to run tests against those preview environments, and we have an integration with GitHub where you can see the results of those tests directly in GitHub as a check attached to that PR. So now, before I approve a change to be merged in the pull request, I can see exactly what the test results are, both in my build stage and against that preview environment. This is another great example of how we can build quality into yet another stage of our SDLC. 
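
To make the pull request workflow concrete, here is a minimal sketch, in Python, of what a CI step could do after running end-to-end tests against a preview environment: report the results back to the pull request as a GitHub check. The repository name, token, commit SHA, and result counts are hypothetical placeholders, and this is not mabl's integration code, just an illustration of the GitHub Checks API pattern such an integration relies on.

```python
# Sketch: publish end-to-end test results on a pull request as a GitHub check
# run, as a CI step might after testing a preview environment. All values are
# hypothetical placeholders.
import requests

GITHUB_API = "https://api.github.com"
REPO = "my-org/my-app"        # hypothetical repository
TOKEN = "ghp_example"         # hypothetical token with checks:write permission
HEAD_SHA = "abc123"           # commit the pull request points at

def report_check(passed: int, failed: int) -> None:
    conclusion = "success" if failed == 0 else "failure"
    payload = {
        "name": "end-to-end tests",
        "head_sha": HEAD_SHA,
        "status": "completed",
        "conclusion": conclusion,
        "output": {
            "title": f"{passed} passed, {failed} failed",
            "summary": "End-to-end test results for the preview environment.",
        },
    }
    resp = requests.post(
        f"{GITHUB_API}/repos/{REPO}/check-runs",
        json=payload,
        headers={
            "Authorization": f"Bearer {TOKEN}",
            "Accept": "application/vnd.github+json",
        },
        timeout=30,
    )
    resp.raise_for_status()
```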

Last week, we had a visual sent to us by Ritual, one of our great customers, on exactly this: their view of how testing happens, from the preview environments for a feature branch, into their shared development environment, and into staging and production, with tests executed automatically at each stage, providing them with visibility into quality at every stage of their release process. That's what we mean when we talk about integrating quality across the SDLC. 

Now let's talk about quality across the entire customer experience. One important enhancement this year from mabl was the addition of mobile web, or responsive, testing. Many of our initial customers were able to create effective end-to-end tests against their web applications, even where that includes testing PDFs or email, and those capabilities have been part of mabl for some time. But when we think about testing the entire customer experience, so many users access applications from mobile devices, from mobile browsers, or from devices with different resolutions. 

In mabl, with just a few clicks, thanks to the new desktop application, you're able to run the same test across various mobile devices and resolutions. So responsive testing is one way that we start to extend beyond the core experience to include more of that customer experience. Another, for many of our users, is API testing. Many customers that we work with offer APIs to their end users or partners as a core interface, and yet before now, they didn't really have a great way of doing end-to-end testing for those APIs. So this year, we were excited to release to general availability our end-to-end API testing capability, which fully supports Postman: you can import your Postman collections, you can edit the API steps from there, and you can also export your mabl API tests to Postman. We've been very pleased with the adoption and uptake of this new capability. Those are two examples of how, from a quality engineering perspective, we can start to think about validating quality across that entire customer experience. 
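
For readers less familiar with end-to-end API testing, here is a generic, minimal sketch of what such a test exercises: chained requests where the output of one call feeds the next, with assertions along the way. The base URL and endpoints are hypothetical; in mabl these steps would be authored in the product or imported from a Postman collection rather than hand-coded like this.

```python
# Sketch of an end-to-end API test: create a resource, read it back, and
# assert on the responses. Endpoints and payloads are hypothetical.
import requests

BASE_URL = "https://api.example.com"  # hypothetical API under test

def test_create_and_fetch_order():
    # Step 1: create an order and assert the API accepts it
    created = requests.post(
        f"{BASE_URL}/orders", json={"sku": "ABC-123", "qty": 2}, timeout=10
    )
    assert created.status_code == 201
    order_id = created.json()["id"]

    # Step 2: fetch the same order and assert the round trip is consistent
    fetched = requests.get(f"{BASE_URL}/orders/{order_id}", timeout=10)
    assert fetched.status_code == 200
    assert fetched.json()["qty"] == 2
```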

Now, the next stage in quality engineering that we talked about was this progression from making sure that we had the right testing in place, both as part of our agile sprints and story development, all the way through to having effective tests created by testers and developers. But now let's talk about how we can use quality metrics to improve our work. This is a model that was put forth by Jez Humble and David Farley in the Continuous Delivery book, and I feel it's a great roadmap for the path to quality engineering. Just over a month ago, we released a new feature called release coverage. This provides you with at-a-glance access to all of the core metrics for your release; that release may be defined by a timeframe, a version, or otherwise. So it's very flexible to filter your test results to a given release and then see: how many of my tests have run, how many passed, how many failed, how many are in a passing or failing state right now, have we added new test coverage as part of this release, and have we updated tests as part of this release?

The great thing about this is that it's all built in, so you don't have to have a separate tool to report on these metrics. This is a great example of making data available to help us improve our work, because what I'd like to know is: am I delivering better test coverage over time, and is that correlated with better outcomes for my users? Another important enhancement, as part of that release dashboard, is the ability to identify problematic tests. I can see at a glance whether I have tests that have a low pass rate, and I can ask the team: why is that? Let's investigate so that we can become more efficient with our testing. So this is another example of data that we now have at our fingertips to help improve our work. Speaking of data, we also have the opportunity to provide feedback to mabl. As tests fail, I can label those tests with a failure reason, and that data is automatically aggregated and trended in that release dashboard. So I can see, for example, am I having more environmental issues over time? Should that drive some investment in improving my test environments? Or do I have more regressions over time that we're discovering in testing, and I can see exactly at what stage we're discovering those regressions? Again, this is important data that I can now use to improve. One key enhancement to mabl just recently was the ability to label many test failures at the same time, because we're trying to make it as frictionless as possible to categorize these failures. 
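
As a rough illustration of the kind of aggregation a release dashboard performs, here is a short sketch that computes per-test pass rates to flag problematic tests and tallies labeled failure reasons. The data shape and the 75 percent threshold are assumptions for illustration, not mabl's actual implementation.

```python
# Sketch: flag tests with low pass rates and tally labeled failure reasons.
from collections import Counter

runs = [
    # (test name, passed?, failure reason label or None) - sample data
    ("checkout flow", True, None),
    ("checkout flow", False, "regression"),
    ("login", False, "environment"),
    ("login", False, "environment"),
    ("search", True, None),
]

PASS_RATE_THRESHOLD = 0.75  # assumed cutoff for "problematic"

def pass_rates(run_data):
    totals, passes = Counter(), Counter()
    for name, passed, _ in run_data:
        totals[name] += 1
        passes[name] += passed
    return {name: passes[name] / totals[name] for name in totals}

problematic = {t: r for t, r in pass_rates(runs).items() if r < PASS_RATE_THRESHOLD}
failure_reasons = Counter(reason for _, passed, reason in runs if not passed and reason)

print("problematic tests:", problematic)    # {'checkout flow': 0.5, 'login': 0.0}
print("failure reasons:", failure_reasons)  # Counter({'environment': 2, 'regression': 1})
```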

Now, speaking of data, let's talk about how we can shift our focus to thinking about non-functional requirements, in particular performance. Every time mabl runs, in every step of the test, we're capturing a great deal of data: a snapshot of the DOM, all of the API calls that are made in that step, screenshots, and also the page load time from an end user's perspective. We're now aggregating that load time in many ways. Here, I'm looking at that release dashboard and I can see a bit of a spike in page load time right at the end of September, across my entire application. So I want to drill in there. I can see, with one of my tests, a corresponding spike right at the same time. For this test end to end, the total load time across all of the page loads is typically 12 to 15 seconds. For that period of time, it spikes significantly, to the point where, for one run, it was over 50 seconds. So I want to drill in and understand that better, and with a single click, I can see that yes, there was one test execution where the page load was way longer than the others. But it doesn't represent the beginning of a trend; it was more of an anomaly. The page load was 46 seconds for that one step of this test. So here, I determined that it wasn't a real issue and it wasn't worth creating a bug, but I was able to get to that answer very quickly. You can imagine how daunting it would be to identify this type of issue as a manual tester, for example, or using a script-based solution that doesn't have all of the rich diagnostic data and infrastructure that comes as part of the package with mabl. 
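
A simplified sketch of how a load-time spike like the one described above might be flagged: compare each run's total page load time to the median of recent runs. The three-times-median threshold and the sample numbers are assumptions for illustration, not mabl's model.

```python
# Sketch: flag test runs whose total page load time far exceeds the median.
from statistics import median

total_load_seconds = [13.2, 12.8, 14.1, 12.5, 50.4, 13.0, 13.7]  # per test run

def flag_spikes(samples, factor=3.0):
    baseline = median(samples)
    return [(i, s) for i, s in enumerate(samples) if s > factor * baseline]

print(flag_spikes(total_load_seconds))  # -> [(4, 50.4)]: one anomalous run, not a trend
```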

Through my investigation, though, I did see something problematic: one page of the application actually loads more than six times longer, on average, than the other pages of the application. So while I don't have an incident related to this release, I do see an opportunity for improvement, and this is the type of proactive analysis and insight that our quality engineers are now able to deliver back to the team. Those are examples of how we can use data to improve our work, and also how this test data, this diagnostic data, starts to unlock possibilities for us to add more value to our teams by validating non-functional attributes of quality as well. But this all only really works when it's integrated into the way that we build and ship software as teams. 

Since the beginning, mabl has been well aware of the fact that if you don't integrate seamlessly into the processes and tools that software teams use, they're not going to use you. So from the beginning, for example, we've integrated with Slack. We've integrated with Jenkins, with CircleCI, with GitLab, and now with GitHub, Azure DevOps, and more. In each of these cases, we endeavor to fit as seamlessly as possible into the team's workflow and pipeline. 

I'm excited today to introduce our latest integration. For quite some time, our enterprise customers have envied the rich integration that we've had with Slack, while many of them prefer to use Microsoft Teams. So today, we've made available an integration between mabl and Teams. In just a few clicks, you can integrate mabl and Teams and receive notifications when tests fail. It's highly configurable: you can choose where those notifications are sent, and you can choose what type of triggers will actually send the notifications. Our customers report that this type of integration fits really well into a collaborative team workflow, where we want to share the same context around test failures, comment on the failures, take ownership of looking into them in real time, and more. So we're very excited to make this new integration available as of now. 
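
For the curious, here is a bare-bones sketch of how a test-failure notification could be sent to a Microsoft Teams channel through an incoming webhook. The webhook URL and message fields are placeholders; mabl's built-in integration is configured in the product rather than scripted this way.

```python
# Sketch: post a test-failure notification to a Teams incoming webhook.
import requests

TEAMS_WEBHOOK_URL = "https://example.webhook.office.com/webhookb2/placeholder"

def notify_failure(test_name: str, environment: str, link: str) -> None:
    message = {
        "text": f"Test failed: {test_name} in {environment}. Details: {link}"
    }
    resp = requests.post(TEAMS_WEBHOOK_URL, json=message, timeout=10)
    resp.raise_for_status()

notify_failure("checkout flow", "staging", "https://app.example.com/results/123")
```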

Over the last year, perhaps our most significant integration was with Jira Cloud. Jira Cloud is viewed as a strategic platform for more than half of our users, and more than 70 percent of our users take advantage of the rich functionality available in Jira Cloud. So this year, we launched a rich integration with Jira Cloud that's on par with the integration we already provided for Jira Server and Data Center. With minimal configuration upfront, you're able to create a bug, an issue in Jira, at the click of a button, with all of that rich diagnostic data attached to it. Imagine there's a test failure and it looks like there's a bug: I click the button to create an issue in mabl, and it automatically creates that issue in Jira with the Chrome trace attached, a snapshot of the DOM attached, and a screenshot of exactly the state the application was in when the test failed. This is very valuable in reducing the back and forth that can often happen between the developer and whoever is working on the testing at that time to replicate the issue and capture the appropriate information, and it speeds up diagnosis and triage. This integration is already widely available, and we have many, many customers using it as of today. It's also a significant step in our partnership with Atlassian, because so many of our customers rely on Atlassian products, and we've been working closely with the Atlassian team for some time. It's in that spirit of partnership that I'm so excited to welcome Gareth and Erica from the Atlassian team to talk more about some of the recent innovations in Jira.
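
Under the hood, the workflow described above maps to a couple of Jira Cloud REST calls: create an issue, then attach diagnostic artifacts such as the failure screenshot. The sketch below uses placeholder credentials, project key, and file paths, and is only an illustration of the underlying API pattern; mabl's integration does this for you at the click of a button.

```python
# Sketch: create a Jira Cloud issue, then attach a failure screenshot.
import requests

JIRA_SITE = "https://your-site.atlassian.net"  # placeholder Jira Cloud site
AUTH = ("user@example.com", "api-token")       # placeholder email + API token

def create_bug_with_screenshot(summary: str, description: str, screenshot_path: str) -> str:
    issue = requests.post(
        f"{JIRA_SITE}/rest/api/2/issue",
        json={"fields": {
            "project": {"key": "QA"},          # placeholder project key
            "issuetype": {"name": "Bug"},
            "summary": summary,
            "description": description,
        }},
        auth=AUTH,
        timeout=30,
    )
    issue.raise_for_status()
    key = issue.json()["key"]

    # Attach the screenshot captured at the moment the test failed
    with open(screenshot_path, "rb") as f:
        attach = requests.post(
            f"{JIRA_SITE}/rest/api/2/issue/{key}/attachments",
            headers={"X-Atlassian-Token": "no-check"},
            files={"file": f},
            auth=AUTH,
            timeout=30,
        )
    attach.raise_for_status()
    return key
```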

Erica Sa  

Thanks, Dan, for the great intro. It is so great to be here. Hey everyone, my name is Erica and I'm a Product Manager here at Atlassian. I'm here today with Gareth to talk about how you can use data and insights to continuously improve your team, which we believe is how you enable high-performance teams. Now, I know everyone here understands the power of data. It's ingrained in everything we do. At work, we rely heavily on all kinds of data points and metrics to make business decisions, whether it's about recurring revenue, product usage, or user retention. Outside of work, we're tracking how well we're sleeping, how much we're exercising, or even what we're eating to keep track of our fitness and health. And yet, when it comes to understanding our team's work, we tend to revert back to relying on our intuition. This isn't because we don't care about our team's work; in fact, we've heard from many of you that you do need help from Atlassian in answering questions like: How is my team doing? Are we efficient and performant? What can we do to improve the way we work? But unfortunately, these questions are surprisingly hard to answer; in fact, there are mountains to climb.

First, you need to know what to measure and when to measure it, and once you understand this, you still need to figure out how to gain access to that data and start collecting it. Even then, you need to figure out how to visualize the data and work through it to extract those golden nuggets, the insights, from all of those data points. Finally, this shouldn't be a one-time exercise: working with data needs to become a habit. The fact that there are so many challenges to work through is a big problem, because you can't improve what you can't measure. 

At Atlassian, we're on a mission to help your team achieve this loop of continuous improvement without having to climb all of those mountains. We believe data should be right where you need it, when you need it. This is why we built insights into Jira Software Cloud. Insights bring key Agile and DevOps data points to where your team works: on the board, in the backlog, and in the deployments view. Today, Gareth and I are going to walk through how you can use these insights and other data features to improve your team's work. 

So let's start with insights in the backlog, where your team plans a sprint, and let's imagine, for the audience here today, that your engineering team has decided to run a quality sprint. The team has been shipping features left and right, moving really fast, but unfortunately without adding much quality and testing work in the meantime. So this is an opportunity to make up for that before the next set of features is specced out and designed for the next sprint. To plan this sprint, the first thing you need to do is to understand how much to add to the sprint. 

For this, you can look at the sprint commitment insight here, which gives you a recommendation of how much work should be added based on the average of the work completed over the past five sprints. You can look at these little bar charts over here, where the colored bar at the end indicates how much is currently assigned to the sprint, and the other bars show how much was completed in past sprints. If the workload is within the recommended range, we show it in green, and if it's way below or way over, we indicate this in orange. So it's very easy for you to use this insight. After the amount of work is set, you can now look at the issue type insight here to decide what type of work the team will be taking on for the sprint. Maybe the focus of this sprint is to increase the percentage of test cases your team is automating, so the majority of the work here is going to be about adding and updating test coverage. Of course, we understand that it would be even more valuable if we displayed the actual test coverage data or software test automation goals here. Admittedly, it's not here yet, but it's something we'll consider in the future. 
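
As a rough sketch of the sprint commitment heuristic described above: recommend roughly the average of the last five completed sprints, and flag the current sprint's load when it falls well outside that range. The plus-or-minus 20 percent tolerance is an assumed value for illustration; Jira's actual thresholds may differ.

```python
# Sketch: recommended sprint commitment from the last five completed sprints,
# with a green/orange status for the currently assigned workload.
def sprint_commitment_status(completed_last_five, assigned_now, tolerance=0.20):
    recommended = sum(completed_last_five) / len(completed_last_five)
    low, high = recommended * (1 - tolerance), recommended * (1 + tolerance)
    on_target = low <= assigned_now <= high
    return recommended, "green" if on_target else "orange"

# Example: the team finished 30, 34, 28, 32, and 36 points; 33 are assigned now.
print(sprint_commitment_status([30, 34, 28, 32, 36], 33))  # -> (32.0, 'green')
```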

Now let's say your team has kicked off the sprint after the planning, and it's been a little over a week. Most of the team is now working on the board to track and move the work forward. Here on the board, we also have a good dose of insights that are perfect to review during your team's daily standup. Looking at the sprint progress insight on the board, you get a sense of your team's status in the sprint. The burndown insight will give you an even better idea of the progress, and it also shows you where changes in scope are happening.

Let's say for this sprint your team is running behind according to this chart. Your team can review the prioritization to ensure that the most important quality work, the work you absolutely want to make sure gets done in this sprint, gets done before the work that can slip to the next sprint. The sprint burndown is unfortunately available to only a small group of alpha customers today, but it will be open to everyone in a few months. Next, if your team is wondering how you're tracking across different goals, the epic progress insight is the one to review. This insight shows you how much work in the sprint is assigned to different epics and also visualizes their progress: green means done, blue means in progress, and grey means not started. So you can look at this insight to regularly evaluate the priority of the goals and how much effort needs to be dedicated to each stream of work, based on your team's progress and the work's priority. So far, I've shown you how your team can use insights from the board and backlog for running a quality sprint. We all know the quality sprint is in the service of increasing the team's velocity, so now Gareth will talk about how we can also help your team move even faster with DevOps metrics. 

Gareth Wham  

I'm Gareth. As a team is shipping value, they're able to track the delivery of that value via the deployments view in Jira. Alongside that, we have two DevOps metrics: cycle time and deployment frequency. Cycle time measures the time it takes from the first commit through to that code running in production. Deployment frequency measures how often you're deploying to your production environment. With these two metrics, teams can get a sense of how often they're shipping value, and how long, on average, it takes to ship it. Recently, we introduced full-page reports for these metrics. For cycle time, we're able to show you your team's cycle time week over week, and how each week is performing against its rolling 12-week median. 

As you scroll down the report, we have another chart that shows a snapshot of the week. In the snapshot, we can see which issues exceed the 12-week median, and we also provide more detailed information about what constitutes a particular issue's cycle time. For example, we can see how many PRs and commits have been made against a particular issue, and we've also calculated the amount of review time for a given issue. This helps the team start to identify bottlenecks in the delivery process, and gives an indication of why those bottlenecks may have occurred. In the deployment frequency report, similarly to cycle time, we show a weekly breakdown. 

We also have the 12-week rolling median to allow for weekly comparisons. Below that, the deployment frequency is shown broken down by environment. This is a similar mindset to what Dan touched on earlier in regards to understanding specific testing environments. So if your team is only able to deploy as far as staging before handing over to an ops team for the production deployment, then you can track this over time from this report. As we scroll down to the bottom of the page, you can see a weekly snapshot. This shows the average number of issues per deployment, or what we refer to as batch size. The aim here is to help the team understand how their batch size is trending, whether it's increasing, decreasing, or staying consistent. Ultimately, though, the goal is to reduce deployment risk by reducing the number of changes that are happening at any one time. Today, we have a core set of experiences and insights available across key Agile touchpoints and key DevOps areas, such as plan, track, code, and deployments. 
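
To pin down the metric definitions used in this segment, here is a compact sketch under assumed data shapes: cycle time as first commit to production deployment, weekly deployment frequency paired with a trailing 12-week median, and batch size as the average number of issues per deployment. It mirrors the definitions in the talk rather than Jira's implementation.

```python
# Sketch: cycle time, deployment frequency with a trailing 12-week median,
# and batch size, computed from simple illustrative inputs.
from datetime import datetime
from statistics import median

def cycle_time_hours(first_commit: datetime, deployed_to_prod: datetime) -> float:
    return (deployed_to_prod - first_commit).total_seconds() / 3600

def weekly_frequency_with_median(deploys_per_week: list[int]) -> list[tuple[int, float]]:
    # Pair each week's deployment count with the median of the trailing 12 weeks
    return [
        (count, median(deploys_per_week[max(0, i - 11): i + 1]))
        for i, count in enumerate(deploys_per_week)
    ]

def batch_size(issues_per_deployment: list[int]) -> float:
    return sum(issues_per_deployment) / len(issues_per_deployment)

print(cycle_time_hours(datetime(2021, 10, 4, 9), datetime(2021, 10, 6, 17)))  # 56.0 hours
print(batch_size([3, 5, 2, 4]))  # 3.5 issues per deployment
```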

You'll also notice that we have the same familiar integrations as provided by the mabl platform, so we're in good company there. In the future, we want to expand on this to include all the experiences and integrations that software teams use to get their work done across the entire software delivery lifecycle, and then use this information to derive meaningful insights and metrics. Some key call-outs here are that we will explore testing and observability and all of the key tools that are used in those specific categories. This is all in service of providing critical information to teams that helps them deliver better software. I talked about adding new experiences and integrations across the entire software delivery lifecycle. 

These integrations enable us to provide new capabilities, as well as metrics and insights, to software teams. Secondly, we want to provide smart features using this integrated data set and the core agile data that is at the center of Jira today. We'll use it to do things like suggest the next best action a team can take to remove a bottleneck, or perhaps make predictions about whether a team's cycle time is likely to increase based on their current velocity. Finally, we'll offer custom reporting to allow teams to bring together all of their data across their entire toolchain and construct the reports that matter most, based on the data that they care about the most. Thanks for listening today. Hopefully, this gives you a sense of how your team can improve their work and delivery using data and insights across the entire software delivery lifecycle, and how Jira Software is becoming a critical platform for software teams. That's it from Erica and me. I want to thank Dan and Leah and the team at mabl for the opportunity to share our story with you all today.

Dan Belcher  

Thank you so much, Gareth and Erica; we really appreciate you joining us here today. I have to say, I'm very excited to get our hands on the new Jira insights capabilities for use internally at mabl, not to mention the incredible integration opportunities ahead of us. It's always great to work with the Atlassian team, because teams are at the center of everything that they do, and I think if we've learned anything over the last couple of years, it's that for this to work, for DevOps to work, quality has to be treated as a team sport. We do see patterns emerging across our customer base in terms of how they design their teams. One pattern that I see more often recently is that quality engineers will have two roles. One is their embedded role: they're embedded within the Agile squad for sprint work, meaning they're partnering with the developers to add and extend test coverage to validate changes as part of, for example, creating or launching a new feature. But then, as Erica alluded to, there's work to be done centrally to improve our quality engineering posture overall. So we'll see those teams, sometimes dedicated central teams, sometimes virtual teams, have their own sprints outside of their product sprints to improve the quality engineering posture, whether that's addressing flaky tests, as I highlighted earlier, improving our test environments, improving our test data, working on the process, or otherwise. So at mabl, we're committed to supporting both sets of processes: in-sprint product development, and also what we're calling quality engineering improvement. 

This year, we've also started to spend much more time thinking about these processes from a management perspective. What are the dashboards and reports that our teams need to provide to their broader organization and leadership to help them make the right decisions? One of our great customers, SmugMug, has talked about the transformation in roles that they've undergone over the past couple of years, in part enabled by mabl. You'll hear Janet in the panel later today talk about how they've transitioned from manual QA to having the same people automate tests and become test engineers. They'll talk about how quality assurance used to be a stage in their SDLC, and how quality is now pervasive across the entire SDLC rather than a discrete stage. In a similar sense, Janet will talk about how they used to work in silos, with a quality assurance silo and a developer silo, and how the teams are now much more integrated. 

I think this is exactly the type of transformation that we want to enable, so that we can ensure that quality does become, in fact, a team sport where everyone is participating. We see much more innovation on the horizon, both in terms of how we can use the data that we capture in tests to provide more insight into the customer experience, and in terms of extending testing to new interfaces. You will also see us continue to invest in that job one: making it as easy as possible for you to create and maintain reliable end-to-end tests. With that, I'm excited to welcome John Kinnebrew, one of mabl's machine learning engineers, to talk about new innovations that we're about to deliver to improve timing in automated software tests. Welcome, John.

John Kinnebrew  

Thanks, Dan. As we started testing our new unified runner across a variety of different applications and environments, it quickly became clear that all the work we've done to improve performance and speed could be a double-edged sword. For example, if you have, say, a QA environment that runs slower, where things take longer to happen than in your production environment, mabl could actually run too fast and try to interact with the app before it had finished processing the last action. 

We introduced this notion of interaction speed control, where you could tell mabl, essentially, that this QA environment is going to need to have interactions slowed down for it to be able to handle them. But look at what actually happens in a test. For example, this is a mabl-on-mabl test where we're testing our own results page. First, we click this failed filter so that we can get failing tests, and here we actually do need to slow down and wait for the app to react. Ultimately, we're going to get a list of failed tests, and then we can move on and assert about those individual failed tests. But if we look at what's happening immediately after selecting this filter, we've got three states at the bottom of this graph from Chrome DevTools. Initially, we're still showing those passing runs, so if mabl tries to click into one of these at this point, we're actually going to fail the test, because we went ahead and did something before the app had actually reacted to our last action. So we need to wait: next there'll be a spinner, and then finally we'll show the failing runs. 
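
The waiting behavior John describes can be pictured as a simple polling loop: keep checking whether the app has visibly reacted, for example that the spinner is gone and the expected rows are present, before the next interaction, up to a timeout. The predicates below are hypothetical stand-ins for real DOM checks made through a browser driver; mabl's models learn this timing per app and environment rather than relying on fixed polling like this.

```python
# Sketch: poll until the app has reacted before the next interaction.
import time

def wait_until(ready, timeout_s=10.0, poll_s=0.1):
    """Poll the ready() predicate until it returns True or the timeout expires."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if ready():
            return True
        time.sleep(poll_s)
    return False

# Hypothetical stand-ins for real DOM checks made through a browser driver
def spinner_is_gone() -> bool:
    return True

def failed_run_rows_are_visible() -> bool:
    return True

if wait_until(lambda: spinner_is_gone() and failed_run_rows_are_visible()):
    print("app reacted: safe to click into a failed run")
else:
    print("timed out waiting for the failing runs to appear")
```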

We need to capture this information about the timing and the state changes here in order for mabl to wait intelligently. Fortunately, we have a lot of infrastructure and processes running in the background in real time. Every test is being analyzed, and we're building models of the elements and the individual apps and environments so that we can intelligently find the right element and do things like auto-heal when the app changes. Now we're incorporating more of this timing and initial state-change behavior in the app, so that we'll have a model of when, and how, we can tell that the app is actually ready to move on. Of course, this is happening in the background as tests are running in the cloud, so these models will continue to be updated and optimized. Every test step can run as fast as possible, but we can wait until the app is ready, to ensure that the tests still run correctly and reliably. So we're excited about this innovation in intelligent timing modeling, and with that, I'll hand it back to you, Dan.

Dan Belcher  

Thank you very much, John. I hope this makes clear that we're committed to innovating not only in the area of using data to improve our quality engineering posture, but also in continuing to invest in that job one: making it as easy as possible to create and maintain effective test coverage. This is all in service of delivering transformational outcomes to the business. I hope our talk today has triggered some thinking about how quality engineering enables the key transformations that we're looking to achieve across the software industry, whether digital, DevOps, or technological. It's really focused on delivering outcomes for our business. 

I'm always thrilled to see what our customers achieve. Over the next two days, you'll hear them talk about not only their team outcomes, in terms of better test coverage, lower effort to maintain tests, and so forth, but also tangible outcomes that affect the business. They'll talk about a 50 percent reduction in defects in production, meaning a higher-quality product being delivered to end users. They'll talk about being able to reduce spending on test maintenance and shift that spending to other areas. They'll talk about improved agility and throughput, delivering more capability faster to their users. 

We're just beginning to see data on improving the customer experience. I expect that next year we'll be talking a lot more about customers achieving dramatically better application performance and higher customer satisfaction as a result. I also expect us to be talking a lot more about our customers being able to reduce accessibility issues in production, or even get full coverage of accessibility for the first time. Those are the types of outcomes that we're focused on now, and the mabl team is here to support you in achieving them. If you didn't know, support is a core value for our entire company. Of course, it starts with our support team, and if you've had the pleasure of working with that group, you'll know that the idea of support runs in our veins and proliferates across our organization. Whether you're working with our sales team, our solutions architects, our marketing team, customer success, engineering, product, or otherwise, we are here to support you. So please do not hesitate to reach out with questions or feedback anytime. With that, I want to thank you once again for joining the event. I hope you have a great couple of days. Thanks again.