UX Stories at mabl - Part 1

UX is something that any product-led team should be concerned about - from engineering to quality assurance to marketing. Company and revenue growth come from delivering excellent experiences at every stage of the customer lifecycle, and that's especially true once the user is in the actual product.

There are general challenges present in every redesign project, and some that are unique to the component being redesigned. So we thought we'd share a bit about how we at mabl focus on providing a great experience to our customers. This series of posts will introduce some of our upcoming mabl app design updates, our process, the unique challenges we faced, and how we approached them.

This first post covers how we use UX design and research practices at mabl to give our customers a great experience.

 

Defining goals and priorities

The core goal of our upcoming design updates is to make the mabl app efficient and, above all, approachable. For example, our engineers have done a great job collecting a ton of diagnostic information from every step of every test run, but for new users that wealth of information can be overwhelming. This gave us an opportunity to work together as a team, focus on our customers, and design the best experience for them. In future posts, you'll see the updates to our test output page that we're working on right now, which will better organize all of that information.

When we start new design work, we focus on evaluating what we have today and then defining what our goals are. This often involves getting feedback from internal and external parties.

Internal folks:

  • Product management: Customer usage metrics. Are there any areas where we can reduce drop-off in usage? Are there areas of the app that seem underused?
  • Engineers: What are the goals of the design? Are there several goals? If so, how are they prioritized?
  • Customer Success: What are the most frequently asked questions stemming from the design at hand? Are there any opportunities for education that we can accomplish in the app?
  • Sales: Are there any opportunities to clarify how our solution solves a relevant problem of the prospect?

External sources:

  • Competitor analysis. How are competitors designing to solve the same problem? Why do they do it that way? What do they do well? What opportunities for improvement do they have?
  • User interviews. Hear directly from the user. Where are their goals misaligned with ours? Do we want to guide the user toward our intended path, or are their assumptions the better solution?
  • Customer interviews. What problems do they have with the design? Are those major problems or minor nice-to-haves?

 

The testing part

After thoroughly researching the problem and setting goals, we start iterating on a solution and testing it with users. This phase is just one lap in a longer race.

Keeping the overarching goal in mind (in our case, making the app as efficient as possible while being as approachable as possible), we design a version, share it internally, and evaluate it using the same methods as in our initial review. Then we share it externally via usability testing sessions, gathering multiple users across different companies and roles to get an aggregate sense of how the design under review will hold up.

There are a few things to look for during these evaluations. You might need to validate, or even update, the original priorities or goals of the design; you may discover new goals as the design is shown to more users and you move further down the design process. For example, when we started thinking about improvements to our test output page, we learned that the core goal users have with this page is to gather information about a failing test. We also learned that a secondary user goal is to work collaboratively with their teams.

But don’t view new goals as a total miss on you and your team’s part. New goals are a great sign: they can mean there has been improvement since the last design iteration, because the initial problem set is now less prominent in the new design. The user might be keen on communicating their “nice to haves” now that their “need to haves” are met.

We also look at whether users are able to quickly find the primary action on a given page. In a mabl user study focusing on our test output page, we might measure how long it takes a user to diagnose a test failure. If we start to see user preferences trending toward the new design, we know things are heading in the right direction.

 

The do it all over again part

UX design isn’t a project that’s ever complete. There are always new features under development, designs become outdated, and new challenges are discovered. At mabl, we focus on continually re-evaluating our experience in order to provide the best possible product to our users:

  1. Gather goals, priorities, and feedback internally
  2. Initial design
  3. Evaluate against priorities and goals
  4. Test with external users and customers
  5. Evaluate and/or update priorities and goals
  6. Iterate on design
  7. Test again with external users and customers
  8. Repeat steps 5-7 as needed
  9. Implement
  10. Track performance, identify improvements for the future

 

What’s coming next?

mabl is fortunate to have a team dedicated to UX at this growth stage. We follow the processes and principles outlined in this post to provide a great experience to our customers. 

We've got a lot of great design updates to the mabl app lined up, and we're excited to share them with you, along with their unique challenges, as soon as we can. Stay tuned for more posts like these coming soon!

In the meantime, you can use a tool like mabl to continuously ensure that your web apps are working as intended. mabl is a test automation solution that enables anyone to create user-centric end-to-end tests in minutes. You can try mabl free for 14 days right now by signing up here.