We know the customer experience is paramount in today's digital economy. However, the traditional approach to quality assurance - which typically means testing right before production - isn't enough to keep up with today's fast pace of development and users' high expectations; we need quality engineering. But what does QE actually mean, and how is it different from quality assurance? Darrel will present shared definitions and a framework you can use to assess your organization's QE maturity. You'll come away with an understanding of modern quality practices and a roadmap for adopting them in your organization.

Transcript

Darrel Farris

Hello, everyone, and thank you for joining today. We're going to talk about the journey to quality engineering adoption. Let me start off with a bit of a prequel. About 2,500 years ago, Heraclitus noted that everything changes and nothing stands still, or, as it's more commonly paraphrased, "change is the only constant."

That was a very long time ago. But as we heard in today's keynote, and as anyone who's worked in tech for more than a few years is aware, his observation remains true. Change is all around us and always will be. The challenge for us as workers, technologists, teams, and humans is how we respond and how those responses evolve.

For those who don't know me, hi, my name is Darrel Farris. I manage our solutions engineering and technical account management teams at mabl. I've been here for around four years, but I've worked in the tech industry for over 20, and about half of that, or more, around themes of quality assurance. And as it says here, I love working in this space. I love testing, I love testers, and I love teams who care about testing. Part of my mission here is to help elevate and empower those individuals.

A little bit about our agenda today: we're going to talk about the shift from QA to QE, how we can think about quality engineering in terms of maturity and capability, and some potential next steps for you, your organization, and your teams. And then, of course, we'll leave time for some Q&A.

So: transformation. Digital technology, DevOps, agile, and perhaps even a certain recent pandemic. It's not just our organizations and our approaches to building products that are changing; our interfaces with those are continuing to change as well. Like, every day is bring-your-kid-to-work day now, right? Even at its most basic level.

Organizations that had been hesitant or resistant to shifts like moving from on-prem to cloud are now more open to such change, perhaps out of necessity. Problems or situations that we could only work around in the past are now being seen in a new light, and perhaps as more approachable and solvable, such as this one.

For a long time, I think QA has been perceived as a bottleneck. We have done so much work through DevOps and automation to make it so much easier for developers to build and ship code faster. Our ops teams have also evolved on the other side, and our infrastructure has certainly evolved. In the old days, if we needed a new server, we had to pick up the phone and call Dell or somebody and have something physically delivered. Then we had to install it and wire it up, probably in a data center in the back of the office. How things have changed.

However, I think we're still working on ways to resolve this QA bottleneck, and I think we might be onto something here at mabl. This is essentially what our ideal state should be: our systems and processes can handle higher rates of change. As I mentioned, DevOps has done wonderful things for our ability to push more out, but we still have some challenges in the middle. What we're endeavoring to do here at mabl, and what I think many of our attendees are trying to do, is find ways to make testing not only more efficient, but more meaningful and more impactful.

Quality engineering, in my mind, is really a state of being. Over the last 10-15 years, starting with the big shift to agile, there's been a lot of experimentation in different organizations around themes like shift left and different modes of testing, and you see new messaging coming out from thought leaders in our space. The way I think about quality engineering is really that it's a state of being, and I'll show a model that I think illustrates this.

What I mean is this: when we start doing all of these great practices we've been learning about over the last decade or so, and increasingly in more recent years, and we really start living them out on a day-to-day basis within our teams and within our workflows, that's when we can really say we are practicing quality engineering. That's what I mean by it being a state of being. While transformation is all around us, it's important to say: yes, quality is often at the center of these various transformations, but quality is not the only thing. There are lots of things surrounding quality as well. Some of that is our behaviors, our processes, and the people and teams executing against these new strategies, new processes, and new modes. And then, of course, there are culture and technology, which can each be both an enabler and a result of the things we're doing.

And it requires change. Change is often scary, as trite a cliché as that sounds, but it's also necessary. And it's up to us, the folks who are producing lots of software, to learn how to embrace these various changes and incorporate them into our workflows. It's wonderful to say, yes, we're moving to the cloud. That's awesome. However, it also requires some other things around it: we need the right technologies and the right processes in order to have good modes of releasing software.

In the old days, I remember hearing feedback from some of our users that monthly updates - once we had built an automated build-and-deploy system - were starting to get a little scary. And it wasn't necessarily the frequency of those updates; it was the risk those updates brought with them. You install an update, and new things break. That's scary. I can recall folks pushing back and saying, "Oh my gosh, could you maybe go to a once-every-six-months release cycle? We can't handle these monthly updates." And now, of course, fast forward to 2022.

Here at mabl, we push to prod multiple times a day; many companies do. We've been talking about continuous deployment and continuous delivery for a number of years now. They're wonderful things, but they require other disciplines beneath them in order to enable them. Same with people and teams. I think agile was probably one of the most disruptive changes, in many shades of that word. I remember early on, no one trusted agile; when you mentioned you were going to start practicing it, everyone bellyached about the meeting overhead and all of those things. No one had yet experienced its benefits. And we had to go through that and learn from it.

Nowadays, agile testing is table stakes, par for the course. Most teams are doing some form of agile, even if it's just stand-ups; we're taking practices out of it and using them to center our teams and choose what to deliver, and how, and when. Same with culture and technology. Culture, in my mind, is often an outcome. It's very hard to build culture intentionally; culture is typically a result of all the other things happening around it. Technology is both an enabler and a disabler. Certain technologies, right? Think CI/CD, DevOps. One of the first things I automated in my career, back around '99 or 2000, was an automated build-and-deploy system. There wasn't anything off the shelf that we could really use. So Darrel learned Perl.

Interestingly, now there's no shortage of off-the-shelf products we can use to stand up such a system in less than a day. Technology, in that aspect, is an enabler for companies undergoing some sort of transformation. Technology can also be a limiter. It's very difficult and challenging, especially in certain industries, particularly regulated ones, to embrace the cloud, for example. The shift from on-prem to cloud is a really big shift for lots of companies. Some of that has been necessitated by everyone working from home for the last two years.

Companies that had been averse even to that now had to figure out ways to make all of this work. And I remember early on, even Zoom, as I'm sure many of us recall, had a lot of challenges staying stable in the early days of COVID, once all of a sudden not only were we as workers working from home, but our children were also attending school and having meetings - things that put a lot of weight on their servers. They had to respond in order to keep the product viable. These are examples of consumer trends driving quality engineering. And if I remember right, it only took them a couple of weeks to get a lot of the major wrinkles ironed out.

And that's what I mean by your technology being either an enabler or a disabler. Oftentimes we have to shift away from technology limiting where we are; we have to embrace new paradigms, new tooling, and new processes, and that can propel us forward. And customers value this. We saw this slide earlier in the day as well, but I think it bears repeating: when our efforts toward building better quality into our products are really embraced at an organizational level, it yields positive customer outcomes. We can see here there's almost a twofold increase in customer satisfaction when themes around quality are actually valued and embraced by companies. Many of us have felt this viscerally on our own journeys, which is why I love these sorts of customer events. I'm thrilled to be able to hear from the rest of our user base what this has been like for them.

So, thinking about the shift from quality assurance to quality engineering: oftentimes, quality assurance was something that happened after something else, and typically that something was being deployed somewhere; then we could stick a bunch of testers on it. In more recent years, we've started trying to bake automation into our user stories. Early on, folks - myself included - required some test automation as part of our definition of done, and oftentimes that led to a bulk of automated tests that became too unwieldy to maintain. So now we see a shift toward automating things more strategically, based on different paradigms: things like risk, and now, customer experience. The talk after this one, for which this is a bit of a prequel, is what I'm particularly excited for, because I agree that today it's more about how our customers and users perceive quality and less about our own perceptions of where our product quality sits. It's really about the folks on the other side.

So what are some of these practices? I mentioned that QA was often a process that happened after something else; some might say that QA, at its worst, was an afterthought. But now I'm delighted to see that we don't have to preach about quality as much anymore. I think teams are really embracing quality as a discipline, as something we all need to be thinking about, and as something that needs to be built not only into our processes and pipelines but into the way we think about our products. So: building quality throughout our software development lifecycle.

Similarly, we're not just focused on pass/fail rates and other metrics or KPIs that reflect our internal view of what product quality looks like; we're making sure our users are perceiving that quality too. I wonder sometimes if one of the outcomes of focusing so heavily on shift left as a theme over the last few years has been to leave our customers behind on the right. Nowadays, and more so in the last year, we're really starting to remember that yes, we want to shift left; we want to get QA and testing involved as early as possible within our processes. But at the same time, we can't forget about our users over there on the right.

There's not only functional testing, which I think is where QA has really been focused for decades, until very recently. Non-functional attributes of our products and services are increasingly important; non-functional testing areas to focus on include performance testing, load testing, and accessibility testing. As a quality engineering team developing software, we need to know if certain changes we impart are impacting performance. Lots of teams didn't have access to that kind of data, or good visibility into it even when they did; testing teams in particular are often siloed away from it.

Nowadays you see more visibility, and we've heard the word "observability" quite a lot in recent years. It's increasingly important for individuals to be aware of this data and use it in their decision making.

Which brings us to the next point: using data to drive continuous improvement. Which data varies across your particular toolchain and your vertical, but we're drowning in data; we have so much at our fingertips now. The challenge is really identifying which data we should focus on, and when, to drive those decisions. Again, a lot of that is contextual, but as a theme I think it holds true for all of us.

The quality engineering journey: I really like this view, but I want to put my own spin on the interpretation here. When I first started thinking about this, I thought of it as a linear progression, and I've come to believe that's wrong. As we move through these new modes, we have to take little chunks out of each of these levels; you don't have to master one before you can move on to another.

So rather than thinking of this as a ladder or some kind of growth chart, I really see it almost like a Venn diagram with quality at the center. Testing manually after development is something we've been doing for a long time, and I don't think we should stop. Manual testing will always have a place; how things feel is very important, and it's hard to get a sense of how things feel from the user's perspective just by looking at automated tests. But we should be automating things away so our manual testing efforts can be more focused on the things that actually matter.

Naturally, we want high coverage of functional automation. I think that's a great outcome, though coverage is a word that's sometimes hard to pin down, especially when you're looking at things more holistically. There's always more work to do, but we want to make sure we have high coverage of the things that actually matter. Oftentimes we think in terms of a risk-based approach to quality, and what I've found is that for teams practicing that, the risk is focused more on business concerns: what if something breaks and we lose money? If I'm in e-commerce, the worst thing that could possibly happen is that our payment system goes down.

The outcomes, when we talk about them, are often framed as "we can't take money." Sometimes we forget that our customers also can't purchase something. And I know, for me personally, if I try to buy something at Target.com, for example, and something is wonky on Target's website, I don't stop to send Target's customer support a message to let them know I encountered some friction. I just go someplace else so I can get my task done.

There's a great stat we surfaced, I believe in the 2022 State of Testing in DevOps report: a certain percentage - I think it might be 32% - of individuals will leave a product after a single bad experience. I am that user, and I know there must be more like me. So it's really important to think of the benefits of test coverage not only from an internal perspective, but also in terms of how it shapes sentiment on the customer side. And when we do have good automation, we want to make sure that automation isn't just happening right before we release.

So: automated testing incorporated into development, themes like shift left. We want to test as early and as often, not as possible, but as is sufficient. And what's sufficient varies; again, it's contextual. There's a very big difference for someone producing medical devices. My mom has a pacemaker, and there's an app for that. If something goes wrong with the software managing her pacemaker, she could potentially die. That's a lot different from a button that's blue when it should really be green. So again, it's about risk not only internally but to our customers; we want to make sure they're having quality experiences, and that what they're experiencing feels good, works, and satisfies their needs.

Non-functional quality: I want to spend a little time on this, not only because, as we heard earlier, mabl is starting some work on performance testing, which is such a great non-functional aspect of our products to focus on. There's also a theme of accessibility here. Those who work in health care and insurance, especially, are more aware that laws have changed or are changing, and companies are getting sued because of non-functional quality issues that affect their user base. So we have to be widening our perspective on what quality is. Quality isn't just "if I click this button, does the right thing happen?"

Quality is also making sure that when I'm pressing the Tab key on my keyboard, the elements get focused in the right order. You can do automation for that - there's a small sketch of what that might look like below - but oftentimes we have to work with other interfaces, screen readers, and different peripherals in order to get the fullest sense of what that's like. UAT, or acceptance testing, still has a place as well; we definitely don't want manual testing to go away, we just have to think about it a little differently. And as we're doing these things, whether simultaneously or individually, we have to be measuring, so we know whether or not we're on the right track.
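To make the tab-order idea concrete, here's a minimal sketch of what such a check might look like using Playwright's test runner; the page URL, selectors, and expected order are hypothetical placeholders, not anything from the talk:

```typescript
import { test, expect } from '@playwright/test';

test('Tab moves focus through the form in the expected order', async ({ page }) => {
  await page.goto('https://example.com/signup'); // hypothetical page

  // The focus order we expect; adjust these selectors to your own form.
  const expectedOrder = ['#email', '#password', '#remember-me', '#submit'];

  for (const selector of expectedOrder) {
    await page.keyboard.press('Tab');
    // After each Tab press, the next element in the list should hold focus.
    await expect(page.locator(selector)).toBeFocused();
  }
});
```

A check like this catches regressions in focus order, but as noted above, it complements rather than replaces testing with actual screen readers and peripherals.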

As I mentioned earlier, we are drowning in data. Everyone has tons of data about how folks are using things, and we have so many new tools that can track users' paths through applications and where they might be hitting stumbling blocks. It's information we wouldn't have otherwise; again, not everyone reaches out to support every time they have a problem, so we sometimes have to look at data to discern whether there might be one. There are tools for this, and one of my favorite signals is rage clicking: when you detect that a user is clicking repeatedly somewhere on the page, trying to get something to happen. I found myself doing this just last night, and it turned my frustration into a little bit of laughter - but that's because I'm aware of it.
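As an illustration, here's a minimal sketch of the kind of client-side heuristic a rage-click detector might use. The thresholds and the telemetry endpoint are hypothetical assumptions; real products typically rely on analytics tooling that does this for you:

```typescript
// Hypothetical heuristic: four or more clicks on the same element within one second.
const CLICK_THRESHOLD = 4;
const WINDOW_MS = 1000;

let recentClicks: Array<{ target: EventTarget | null; time: number }> = [];

document.addEventListener('click', (event) => {
  const now = Date.now();
  recentClicks.push({ target: event.target, time: now });

  // Keep only recent clicks on the same element.
  recentClicks = recentClicks.filter(
    (c) => now - c.time < WINDOW_MS && c.target === event.target,
  );

  if (recentClicks.length >= CLICK_THRESHOLD) {
    // Report to an analytics endpoint (hypothetical path).
    navigator.sendBeacon(
      '/telemetry/rage-click',
      JSON.stringify({ tag: (event.target as HTMLElement)?.tagName, at: now }),
    );
    recentClicks = []; // reset so we report once per burst
  }
});
```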

When your users are rage clicking on something, that's definitely something we want to pay attention to and understand why. What are they expecting to happen, and should they be expecting nothing to happen? And if not, what do we do about that? So, thinking about where you are in your journey, and about all of these different transformations: the aspects driving us toward them vary completely. Some of it depends on your vertical; some of it depends on your own maturity as a company or organization. Where are we in our growth? What can we take on, and what's still a little bit farther away?

Oftentimes there are compelling events that drive these changes. So if I were you, I would ask myself: what are our compelling events? What are my compelling events? If I'm a lead or a manager, what do I see happening around me, and how can I best enable my teams and my individuals to meet those particular challenges? Again, it's completely different for different people, and I hope some folks are thinking about this for the Q&A session. So, where to begin?

Everything is a journey. I think everyone should take a moment to pause, take a breath, and think about where you are today. Where's your company? Where's your team? Because the answers may differ across those. Problems really can't be solved until they're quantified. One of my favorite questions as a tester is: what are we testing here? What problem are we trying to solve? Sometimes the answers to those are unclear, and if they're unclear, that's often an opportunity to bring more folks into the discussion, start widening perspectives, and see if there's new and different insight to be gained. We have to understand what our teams are really good at, what they're not, and where there are opportunities for growth and development. If change is constant, that implies it's not only external change; we also have to be changing to meet those challenges. So understanding where the right change needs to happen is really important, and that should absolutely be a collaborative effort.

Does your technology support your goals? This is an important one, especially for teams undergoing some sort of digital or technological transformation. Oftentimes the old ways of doing things were good, but in a different paradigm they cease to be as good, and sometimes that means bringing on new tooling, new bits in your tech stack, to rise to meet those challenges. I have a feeling a lot of folks in our audience today were trying to use other tools before they approached automation. I myself came from the Selenium world. Part of my backstory is that I evaluated mabl at my previous company, when we were building a new team to test a new product. I had been burned so many times, or kept having to solve the same problems, with Selenium that I was really just looking for a new approach, something different. I learned from Joe Colantonio's Test Talks podcast that there was a new product called mabl, and I reached out to do a trial and had a great experience with the team. At that moment, I had been working on a test that I could get working in Chrome, or in Safari, or in Firefox - which I had to test, because lots of Europeans use Firefox, so it was part of the cross-browser testing work. I could get it working in one browser, but never the others. All of these bits of Selenium - Chrome drivers, Safari drivers, IE drivers - have to talk to one another, and I couldn't get the test to work on all three.

It was perplexing, to say the least. I had that test built in about 10 minutes in mabl. I had my DevOps person sitting in the seat next to me, because we were a cross-functional team, which is wonderful, and he said, "Oh, well, let me see if I can wire that up." Another 10 minutes later, we were running a build, and then we kicked off the test in mabl. It felt like magic compared to the other tools we were using. And that was compelling, because we were building a brand-new product from the ground up and we really didn't have time to suffer through weird problems that were really, really challenging to solve. Many of those problems, I think, have since been solved, but again, it's different for every team, different needs. We needed different tech to support our goals, and I think lots of your teams did too. Where do you need to make an impact? Oftentimes, it's a few places, and I think the challenge then becomes how you approach it. Where do you start? Where do you begin? So think not only about what problems you're having right now, but which ones you really need to solve, and in what order. Is there an order?

So, idea time. Thinking about the different levels on the growth chart, I decided to skip over "testing manually after development," because most of us have been doing that for a while; even with no automation, doing things manually, we typically had to wait in order to have something to test. So, thinking about high coverage of test automation and the enablers for it: for this to work, quality really needs to be embraced at all levels - organizational, team, and individual contributor. And the activities we do in testing, which I think is where lots of us get bogged down, need to be manageable, maintainable, and meaningful.

So how can you start incorporating some of this? I think talking is a really great way to start solving any problem, so testing and quality should be commonly discussed - and by "commonly" I mean it's just part of the vernacular. If we're in a session where we're writing user stories, I don't want to hear only what the thing is supposed to do; I also want everyone to talk about how we're going to test it. Small changes in dev can sometimes bring on the need for a lot of extra testing; the testing and the scope of change aren't always directly correlated. So we have to be mindful of in-sprint testing and how it might affect how we choose to deliver certain bits of functionality. And dev: thank you so much for doing lots of unit testing and really embracing that discipline. Ten years ago, everyone said, "I don't need to write unit tests; they're not worth it; they don't catch anything." I think perspectives have shifted there. And it's not just about TDD; you don't have to be practicing TDD in order to write unit tests, though I think the two often complement each other quite nicely. Having the discipline to do that is a great place to start.
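As a minimal illustration of that discipline, here's what a small unit test might look like. The discount function and the choice of the Vitest runner are hypothetical choices for the example, not anything from the talk:

```typescript
import { describe, it, expect } from 'vitest';

// A hypothetical function under test.
function applyDiscount(price: number, percent: number): number {
  if (percent < 0 || percent > 100) {
    throw new RangeError('percent must be between 0 and 100');
  }
  return Math.round(price * (1 - percent / 100) * 100) / 100;
}

describe('applyDiscount', () => {
  it('reduces the price by the given percentage', () => {
    expect(applyDiscount(200, 25)).toBe(150);
  });

  it('rejects out-of-range percentages', () => {
    expect(() => applyDiscount(100, 150)).toThrow(RangeError);
  });
});
```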

Think about the automation pyramid: the biggest chunk, the base, is really around unit testing. And think about risk again - not only risk to our businesses, but risk to our customers. Those aren't only business risks; they're also sentiment risks. Think about the broken-windows theory. I've worked on teams where, if we had a bunch of visual problems, it was, "Oh, it's just a visual thing, we're not going to fix that," or we'd deprioritize it in favor of something else. But visual things are generally fast fixes; they typically come in as a result of some other change.

When you have too many of those, your customers start to think that someone's asleep at the wheel, and that is something we really don't want. We want our customers to think they've bought into a solid team, into a product that is tech-forward, resilient, and able to handle change. If a customer starts thinking that the team building the product isn't capable of reacting to change, that's pretty scary. And on this last one, testing being maintainable: one of my mantras is that everything we build, we have to maintain. So we need to be mindful of the quantity of tests we have in our stable, and let it be okay to let some of them go if they cease to be as valuable. The nice thing with tools like mabl is that it's so fast to create tests compared to other modes, so it becomes a little more palatable to say, "We don't need that anymore; let's let it go for a little bit." If we need to pick it back up later, perhaps we refactor a few things; that's oftentimes not a huge hill to climb. So sometimes we need to be okay with letting things go for the sake of keeping things manageable and maintainable.

Testing throughout the development pipeline: we definitely want to do that. Where I think some teams get tripped up is on what they should test, and where, and when, and how. I can't answer all of those questions in this talk, because again, it's very contextual. But if we have 500 automated tests in our regression suite, we don't need to run all of them on every commit. We want to run certain tests on commits - that's why we have things like smoke suites. Testing in stages, and testing the right things at a particular stage, is oftentimes very important.
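One common way to implement that staging is to tag tests and run only a subset at each stage. Here's a sketch using Playwright-style title tags; the tag names, selectors, and commands are illustrative assumptions:

```typescript
import { test, expect } from '@playwright/test';

// Fast check that can run on every commit.
test('login page loads @smoke', async ({ page }) => {
  await page.goto('/login'); // assumes a baseURL in playwright.config
  await expect(page.getByRole('button', { name: /log in/i })).toBeVisible();
});

// Longer end-to-end flow, reserved for nightly or pre-release runs.
test('checkout flow completes @regression', async ({ page }) => {
  // ... full add-to-cart -> pay -> confirm journey ...
});

// On each commit:        npx playwright test --grep "@smoke"
// Nightly / pre-release: npx playwright test --grep "@regression"
```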

One of the chief reasons we want to test earlier in development in quality engineering is that it's a lot easier to fix problems early on, before we start building too many trappings around them. So, thinking about where you can start with some of these, or even where you can amplify some of your existing efforts: we have to have automation integrated into CI/CD pipelines. In my conversations with customers who are new to this, they often think they need to build up a big suite of tests for it to be meaningful to run in pipelines. What I usually say is: just start with one. Wire one test up; it doesn't matter what that test does.

It's similar to the notion that if I'm a software engineer on the first day of my new job, the goal is to commit something to the repo. That's usually not because we want meaningful output on day one - though that's nice - it's because we want to make sure everything is wired up to support your ongoing work. I think it's the same with getting testing into a pipeline: build some simple smoke test. Is it alive? Wire that up, get it to run. It builds inertia, and it gets people thinking, "Oh, what other tests could we incorporate into this?" That's the kind of thought pattern you want to nurture across your teams: what else can we do here?
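A first "is it alive?" test in that spirit might look something like this minimal sketch (Playwright again; the URL and the APP_URL environment variable are placeholders your pipeline would supply):

```typescript
import { test, expect } from '@playwright/test';

test('the app is alive and renders', async ({ page }) => {
  // APP_URL is a hypothetical variable set by the CI/CD pipeline.
  const response = await page.goto(process.env.APP_URL ?? 'http://localhost:3000');
  expect(response?.ok()).toBeTruthy();              // the server answered
  await expect(page.locator('body')).toBeVisible(); // something rendered
});
```

Once a trivial check like this runs on every build, growing the suite is just a matter of adding tests.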

Organizing your testing efforts, your plans, and your suites in a way that focuses your testing is really important. Again, we don't want to test everything on every commit; we want to test strategically and mindfully, and we want to test different things at different levels. Sometimes we want to test the same things in different ways. We have tons of APIs and microservices now, and I often think - and I believe the automation pyramid agrees - that we want to test our API layer really, really heavily, because in those kinds of environments our front end is often just a consumer of those APIs. That said, our front ends are doing a lot more work these days; we have tons of client-side processing, and we have to test those bits a bit differently as well, sometimes integrated, sometimes in isolation.
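To illustrate hitting the API layer directly, here's a sketch using Playwright's request fixture; the endpoint and payload shape are hypothetical, and it assumes a baseURL is configured:

```typescript
import { test, expect } from '@playwright/test';

test('orders API round-trips an order', async ({ request }) => {
  // Create an order through the (hypothetical) API.
  const created = await request.post('/api/orders', {
    data: { sku: 'ABC-123', quantity: 2 },
  });
  expect(created.ok()).toBeTruthy();

  // Read it back and confirm the fields survived the round trip.
  const { id } = await created.json();
  const fetched = await request.get(`/api/orders/${id}`);
  expect(fetched.ok()).toBeTruthy();
  expect(await fetched.json()).toMatchObject({ sku: 'ABC-123', quantity: 2 });
});
```

Tests like this run far faster than driving a browser, which is one reason the pyramid puts so much weight below the UI layer.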

But we need to test in a way that gets us insight most efficiently. Testing within the development workflow isn't just automated testing; it's also clicking around and making sure the thing you're building is good. And with mabl, you can not only run your tests in pipelines, but also run them against your local dev environments. We want our tests running against local dev environments - that's about as far left as you can go, apart from the requirements and design phases. But again, we want to start testing as early and as often as makes sense.

On this last one, I'm delighted to hear people are starting to pay more attention to their test environments now. I've heard from so many people that their test environments are flaky, that they're up and down, that they don't have the right data. Ultimately, that makes them less valid as a platform for testing. We don't want to test merely as a matter of course; we want to test in a way that's meaningful. If our environments are not reliable, what can happen is that teams see a failure and say, "Oh, it's just an environment issue, push it out." Then we realize: yes, maybe there was an environment issue, sure, but there was also this other problem, and now it's in production. So do we roll it back? Do we push out a hotfix? How do we address it? The places where we do our testing are just as important as production if we want meaningful results out of them. So I'm delighted when I hear ops teams saying, "Yeah, we're going to start monitoring our test environments, looking for issues there, and making sure they're just as solid as our prod environments." That's the right thing to do.

Non-functional testing in quality engineering means moving beyond just the functional and focusing on things like customer experience, performance, accessibility, and look and feel - feel especially. That means the entire customer experience, everything from sign-up to regular usage to destructive actions as well: how can we delete something? How do we get rid of something? How do I clean my slate so I have fewer things to focus on? The acts of creation are oftentimes the most focused on.

But there's more that users do in our products, so we have to make sure that everything a user might do happens in a good way, in a way that feels right to them. So what can you start with here? Look at software test performance, and think of it in the scope of your releases. Are we improving things? Are things stable and neutral? Stability is not a bad thing in quality engineering: if you can push out lots of change and your performance levels stay solid, congratulations. If you're pushing out changes to your product and you're starting to see a degraded experience... please slow down and look at that. Your customers are experiencing it too. It's not just data that you see; it has real, tangible outcomes for your users.

Think about tests that might be leading to inefficiencies. Are we testing the same thing multiple times? Is there too much redundancy in our test suite? Are our tests structured in a way that slows down our pipelines? Everybody wants fast feedback, but we also want the most meaningful feedback, so there's a balance to be struck. And we have to incorporate things like accessibility testing and performance testing into our regular quality engineering efforts, making sure the data from those is surfaced - and not just surfaced, but surfaced to the right stakeholders and the right people.
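As one example of folding accessibility checks into an existing automated suite, here's a minimal sketch using the @axe-core/playwright package; the page under test is a hypothetical placeholder:

```typescript
import { test, expect } from '@playwright/test';
import AxeBuilder from '@axe-core/playwright';

test('checkout page has no detectable WCAG A/AA violations', async ({ page }) => {
  await page.goto('/checkout'); // hypothetical page; assumes a baseURL

  const results = await new AxeBuilder({ page })
    .withTags(['wcag2a', 'wcag2aa']) // limit the scan to WCAG A and AA rules
    .analyze();

  // Surface any violations as a test failure so stakeholders see them.
  expect(results.violations).toEqual([]);
});
```

Automated scanners only catch a subset of accessibility issues, so this complements, rather than replaces, testing with screen readers and real users.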

Oftentimes it's not just engineers who are concerned with performance, and it's not just application performance either: the places and modes our apps are hosted on matter, because we have a huge infrastructure component now. There's a big difference between deploying an application to a tiny instance of something and to a really big one; we have to find a balance there between cost, performance, and things like that. But it matters, and it's not just application bugs that can create friction for our users; sometimes it's in our infra. Accessibility: all kinds of users want to use our applications. You might not know that, but once teams realize it, it becomes interesting. At my last company - the same company where I evaluated mabl - we learned we had a blind user who reached out to support and said, literally, "Hi, I'm a blind user, and I use a screen reader. It looks like you have some things that are throwing my setup for a loop. Do you have a UX team? Could I help? I really want to use your product." How lovely. The entire team was genuinely excited to receive that message, because we worked on a music product, and we wanted people with different capabilities to be able to make music. Everyone wants to be creative; no one wants to be the roadblock to creativity. So we seized on that opportunity, even though we knew about just one user. And once we did, we started learning that we had other blind users doing this stuff as well. It was a great insight. No one was thinking about accessibility before that - a lot of us think about it altruistically - but we didn't realize we had real users experiencing pain around it. So we addressed it. But you have to know that they're out there.

Then, thinking about continuous improvement and quality engineering metrics: yes, we want to make sure we're using data to drive decisions, but those decisions are varied. What are we investing in, in our product? What do our customers want? What's their sentiment? What are they telling us? Data is not just numbers; data is also information and words. So it's not only data that can be cleanly displayed on a dashboard; we sometimes have to go seek out and manually gather data as well. Our UX team does user research and feedback sessions; we have folks look at prototypes and tell us whether we're on the right track at the human level, alongside things like performance that we can measure programmatically.

We want to make sure our business-critical functionality is well monitored and tracked. That's a great place to impart some automation: not only "is it alive?", but can users perform the subset of actions that are common across the user experience? A great place to start. We also want to be able to view our quality engineering data in a historical context. Can we identify trends? Can we identify patterns? And once we do, how do we respond? Do we have KPIs around these things? If not, should we? So what can you do? Yes, grow and maintain test coverage - and again, don't be afraid to let some of it go if that becomes necessary. Real user monitoring data is such a treasure trove. I love tools that let me see the path a user takes through the application; nine times out of ten, it is not what we storyboarded. It's especially interesting if you have a lot of freedom of movement in your application. It's a bit different if you have very prescribed workflows that you can't get out of, but oftentimes we have a mixture of both. So understanding how users are interacting and interfacing with those is, I think, really important.

Device and cross-browser coverage: are we testing things in the right way? Do we know what our users are coming in on? One of the big insights at a previous company for me: me and all my friends were using Chrome, and we thought no one actually used Safari. Well, lo and behold, 70% of our traffic was coming in through Safari - and not just Safari, but mobile Safari. No one was testing that. And guess what: there were mobile-Safari-specific bugs. So again, it's insight, it's awareness. The response was: all right, everyone. We didn't have automated mobile web testing at that time, so we figured out how to get everybody's individual phones connected to a local dev environment, so we could load up the local app on the phone and start getting an early sense. We eventually figured out a way to do that through automation, but we had to start manually. And we started by shifting that left as well.

I don't want to look at something on my phone and have it take a second to load. We need, I think, to look at application performance holistically; there are lots of different ways to measure performance, and the right way is often a mixture of them. Again, it's something we want to see trends and history for, and we want to be able to correlate it with the changes we're pushing out. Are we impacting things? Are we introducing performance regressions? Are we improving things? One of my favorite things at mabl in the last year was our shift over to Playwright: a transparent change, for the most part, to our users, other than the fact that they saw their tests start to run 40-50% faster. What a beautiful thing to monitor, realize, and celebrate.

Thinking about release coverage, performance, and quality engineering: this is where users can either be delighted or very nervous when they see there's an update. If you have good discipline around quality and you're doing good testing, users see an update available and get excited. If you have a bad reputation for quality, users think, "Gosh, can I just stall this update?" They click the button and brace for grief. That's not what you want. You want users clamouring for more value from your product, not being afraid of it. So think about how testing, and the activities around it, can really drive that. And again, it's not just testing; it's the process and journey to quality engineering. How do we talk about this? How do we set ourselves up for good outcomes? That, I think, is one of the most important facets of this: collaboration. And the outcome is that culture of quality, that mythical thing we're all searching for. Some of the components here are good outcomes in themselves: we want smaller releases so we have a smaller scope of change, and if we have to roll something back, it's not like rolling back six months of development.

Collaboration is so important. I think the agile transformation was one of the things that made us realize we all have to start talking more - and not just within our individual teams, but across the entire organization. Everything we do impacts others; we all interface with mabl in slightly different yet largely common ways. Our internal stakeholders need to have good confidence in the product, as do our customers, and the way we ensure that is that we all talk. Our focus needs to be customer-centric. I know when I started, we were very engineering-focused; honestly, there were some companies where I didn't know very much about our users at all - they barely even came into the conversation. In more recent years, users are at the core of our conversations; they're what matters most. And I think that's a good thing.

Automation: it's 2022, and it's table stakes. We have to have automation, whether that's test automation, automation in our pipelines, or automation in data gathering and presentation. Automation is good; look for opportunities and embrace them. Continuous feedback and improvement, internally and externally, please. We want to hear our own sentiment - developer experience is something we hear more and more about - but our customer experience is vital as well. Again, it's what our customers think, what their perceptions are, that matters most, because that can ultimately determine the viability of your business.

Data-driven decisioning: again, we're drowning in data, so let's make sure we come up for air, look at it, and make decisions about it holistically. Talk about it. What does the quality engineering data mean? Are we sure we're interpreting it the right way? Do we have enough data, and enough complementary data, to discern whether we're examining it and making decisions the right way? Sometimes that's hard to know - you bump your head a few times - but it's worth fighting for. And again: testing throughout the development pipeline, and focusing on quality, not just testing.

Quality engineering, in my mind, is a verb. It looks like a gerund, but there's a lot of doing required to really achieve it. And you have to do different things in different areas - each of those focus areas we looked at. I think it's good to bite off small chunks of them in unison, as opposed to going deep on one, because all of them are important aspects, and the more you chip away at each individual one, the more proficient you can become in quality engineering, and not just testing. And why? Again, this image on the right: this, to me, is freedom. When we can push code out and get it into the hands of customers within minutes, and they're delighted and grateful, and everyone inside feels good, releases become something to celebrate, not something to fear. We want those outcomes internally and externally. We brought up the stat before about customers leaving after a single bad experience; our experience as the people working on these products is also very important. I was touched by Dale Cook from Stack Overflow's comment this morning that his teams just like using mabl. If you like doing something, it feels a little less like work, and enjoying the things you do throughout the day is really important. I say this a lot: we often spend more time with our coworkers and colleagues than we do with our friends or significant others, so it should really be time well spent. Let's all do the things that will make this time well spent for us and our users. Opportunities abound: understand where you are and where you need to go, do a bit of a self-assessment, and try to get some insight into what the right outcomes, and the right paths to them, might be.

Think about people and process and technology, and think about them collaboratively; you've got to get there together, not just as a team, but as an org, as a company. If you need new tooling, go explore that - not just in testing, but in other areas of quality engineering and software development. I came across a team recently at a conference that's just now getting a defect tracking system. They're installing Jira; they haven't had one; they've been doing everything in spreadsheets. I said, you know, it's going to feel painful at first, but eventually it's going to feel amazing. Having tools that do a lot of work for you is great, and having tools that help you sort and organize in common ways is also good. So think about that. Data brings perspective. Oftentimes we think we know something - we're not sure why we think we know it, but we feel like we know it - so get some validation for that. And maybe it isn't validation; maybe it's "oh, I was wrong, I have a different perspective now." You can use that to make better decisions and drive your products and processes to better places. Collaborate, collaborate, collaborate: get the whole team involved, and talk before you do. Agile was all about bringing more collaboration and more folks to the table.

Quality engineering is like the Olympics version of that. So, all hands on deck. Key takeaways - oh look, there's my metaphor. Quality engineering in software development is both evolutionary and multidisciplinary. Quality engineering is not just an engineering or QA effort; it takes a village. And it is evolutionary: if you remember the key points from the slide, a lot of these things are "oh yeah, we should be doing that." So a lot of this doesn't feel like it's coming from left field; it's more about how we do it and how we approach it. But remember, everything is really focused on customer experience. It's not just testing anymore, and I think that's one of the biggest distinctions between QA and QE. And it bears repeating: if testing was a team sport, QE is the Olympics. So, all hands on deck, and let's try to get a few more medals. Thank you very much. I would be delighted to answer any questions that may have come up throughout this.

Zane Halstead

Awesome, thank you so much, Darrel, that was wonderful. The first thing I wanted to ask is: I totally agree that QA is important, but how do you get the development team to accept the change to QE?

Darrel Farris

I think it serves developers to be concerned with this. I don't know very many developers who enjoy fixing bugs; I think most developers enjoy doing customer-facing work - or not necessarily customer-facing, but working on product, working on features, things like that. When something gets kicked back to them, it pulls them out of their current work; it's context switching, and it just doesn't feel good. So I think developers could take a more self-serving view of this and think, "Hey, if we're able to do this stuff right the first time, that lets me focus on the work I'd actually rather be doing, instead of having to keep doubling back on things."

Zane Halstead

Great. What would you say the biggest obstacles are to getting teams and especially stakeholders to make this leap?

Darrel Farris

I think the biggest roadblock oftentimes - and maybe this is a different way of restating the question - is just resistance to change, and often that's at an organizational level. So that's sometimes where a bit of diplomacy comes into play: you have to be able to articulate specific problems and identify their impacts, and there should be both internal- and external-facing impacts. Ultimately, you have to build a case for change, if it's not obvious. If it is obvious and people are still resistant, start shouting. Sometimes you just have to keep pressing. But you also have to build consensus: it's wonderful to be a soloist, but a chorus is a beautiful thing when it comes time to start imparting change. So again, talk to people around you; maybe they're seeing the same things you are, and maybe they can be a good partner for moving the needle on some of these things.

Zane Halstead

Wonderful. Well, I'll go ahead and ask one more. How did you develop a passion for quality assurance and QA? What was the icebreaker for that?

Darrel Farris

I think it was when I was at Microsoft, doing application compatibility testing, which isn't necessarily super exciting testing from an engineering perspective. But I worked on Windows 2000; basically, I was testing upgrade scenarios from Windows 98 to Windows 2000, and then XP - or ME, rather - kind of came into the mix in the middle of all of that, too. What really opened my eyes: I love it when I install an update and nothing breaks and everything is rosy. But when I got in the seat, my scenarios were to install Windows 98, install 20 or 40 apps, set a bunch of preferences, use them for a little bit, upgrade to Windows 2000, and then see what breaks. I think it's been long enough that I can say a lot of stuff broke in the early, pre-release days. It gave me an appreciation for how much work actually goes into this kind of thing; it was something I hadn't known. For me it was an epiphany that so much work goes into so many little aspects of building anything, and then you have to have good mindshare around all of those things and approach them holistically. I think that was the thing that really opened my eyes to a lot of different aspects of QA. It's one thing to test an individual product or an individual feature; it's a completely different thing to test an operating system. I don't know - I lost a lot of hair, but it was really instructive. It was really instructive. Maybe that's the thing that lit a spark under me. QA was always something over there until then, and a friend of mine basically said, "Hey, do you want to come do QA?" I said, "I don't know how to do that." He said, "I'll show you." Okay. And so that's how I learned; that's how I came into QA. I was always doing back-end server stuff before that. Once I started doing it, I just didn't want to stop, and here we are today. So hopefully that's a good answer. Great question.

Zane Halstead

That's wonderful. Thank you so much. With that, we are out of time.