Cross-browser testing is the process of testing whether web-based applications will work properly in a variety of different web browsers. It is designed to ensure that applications being built in one browser are compatible with others. Cross-browser testing will identify if there are problems rendering content across different browsers - for example, a feature available in the most modern browsers may not render in older browsers. In short, it answers the question: “Do users see your web site or app the way that you intend for them to see it?”

While essential to quality results, cross-browser testing can be difficult to perform comprehensively. There are multiple devices to check, different versions of each browser, thousands of different screen sizes, and devices built on different underlying technologies. All of this variety means that it’s simply hard to execute cross-browser testing well at scale. Comprehensive cross-browser testing requires proper planning, the right testing tools, and a solid approach. Let’s walk through why it’s important, the most important cross-browser testing best practices to consider, and how test automation tools can help.

Why is Cross Browser Testing Important?

HTML and CSS are open standards designed to work in all web browsers. These languages define rules that tell a browser how to structure and style a page when it receives a set of instructions.

However, even as the latest browser versions reach closer parity in how they render code, there are still many opportunities for incompatible HTML and CSS rules and frameworks to ruin your user experience. Discrepancies in the way code is interpreted are especially common in older web browsers. New language features arrive all the time, particularly in HTML5 and CSS3, but not all browser vendors have chosen, or have had the opportunity, to adopt the latest web standards.

What this means is that the exciting new effect you created, which works fine in Chrome, may render slightly differently in Firefox or Safari, or not at all in Internet Explorer. Yes, IE has been deprecated in favor of Edge (which has its own issues), but that does not mean all users have stopped using it. In fact, if someone is using an older computer, there’s a high likelihood that they won’t see all of your fancy new designs, or that the designs will render in a way that looks ugly or, worse, is completely unusable.

It’s never a good idea to blame the end-user. You are providing the product, and there’s a chance that your users may not be able to use the latest and greatest version, regardless of how good it is. We need to meet the users where they live, and not expect them to jump through hoops (such as installing a specific piece of software to meet your needs). The truth is that if we expect them to do this, in many cases they simply won’t.

We do cross-browser testing to:

  • Identify broken functionality (the site is completely unusable)

  • Recognize usability issues caused by an incompatible browser or browser version

  • Improve the way the web app or site renders the page to ensure a positive user experience

The goals of cross-browser testing are twofold: first, to determine whether the appearance of a web page varies from one browser to another, and second, to verify that the app under test actually functions as expected in all browsers being tested.

It’s worth noting that some applications are specifically designed to work in one browser or another. For example, many of Google’s applications are designed to run best in Chrome because it’s their technology. Similar preferences may be true for Edge or IE and Microsoft. Regardless, cross-browser testing remains important to ensure that your application is at least passable - maybe not optimized - for multiple browsers.

What Causes Cross-Browser Issues?


Some browsers have bugs

No matter how well a piece of software is designed and built, it is always possible for browsers, including newer releases, to ship with problems that were not weeded out during development, so bugs make their way into production. As web developers, we have little control over bugs within the browsers themselves; they are client-side by nature, and we cannot control what exists on a user’s computer. However, we do need to identify whether any issues exist within each browser. That’s why we test: to ensure that our content is rendered in an acceptable fashion.

Different levels of support for technology

Browsers are not all made by the same companies, and each vendor decides whether, and how quickly, to follow the official W3C guidelines for rendering HTML and CSS. Older browsers, for instance, may not be able to render some of the more advanced features available in HTML5 and CSS3.

Some browsers are designed to have less functionality

In some cases, it doesn’t make sense for a browser to handle every aspect of the code, simply due to limitations in the devices for which it is designed. Mobile devices can have less processing power than a desktop computer, so browsers designed for smartphones may provide slimmed-down interpretations of HTML. They also may not support some of the more advanced animations, which may not work particularly well on a hand-held device anyway.

Cross Browser Testing Best Practices

Now that it’s clear why cross-browser testing is important and what common issues we face when testing, let’s turn to the best practices of cross-browser testing.

Initial planning

Before beginning any cross-browser testing, it makes sense to have a plan in place regarding the content of the site. Determining exactly what content and functionality a site should have can be broken down into a few categories:

  1. What are the ideal features that need to be rendered?
  2. Which features fall into the “nice to have” category?
  3. What is the minimal acceptable functionality and display the site should provide, in the event that someone is using an out-of-date browser?
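To make these decisions explicit, a team might capture them in a simple support plan. Here is a minimal sketch in Python; the feature names and priority tiers are hypothetical examples, not from the original article:

```python
# Hypothetical feature-support plan: each feature is tagged with the
# priority tier the team assigned during planning.
SUPPORT_PLAN = {
    "checkout_form":   "must-have",     # minimal acceptable functionality
    "css_grid_layout": "nice-to-have",  # degrade to a single column if absent
    "animated_hero":   "ideal",         # safe to drop in older browsers
}

def minimum_features(plan):
    """Return the features that must work even in out-of-date browsers."""
    return [name for name, tier in plan.items() if tier == "must-have"]

print(minimum_features(SUPPORT_PLAN))
```

Writing the plan down this way makes it easy to check, for any browser grade, which features are allowed to degrade and which are not.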

Browser Engines

Every web browser runs on a browser engine. These engines parse HTML and CSS and transform them into the rendered pages you see. They are not standalone pieces of software but are core to how web browsers work.

There are a number of different engines and each one translates HTML somewhat differently. It’s not entirely necessary to know the details of each engine, except to understand that they drive how different browsers work.

There are many but the main ones are:

  • Blink: Chrome, Opera, Microsoft Edge
  • EdgeHTML: formerly used in Microsoft Edge; used in Universal Windows Platform Apps
  • Webkit: Safari, and all iOS browsers
  • Gecko: Firefox (and Thunderbird email client)
  • Trident: Internet Explorer (and Outlook email client)

It’s important to note that not every browser engine runs on every operating system. For instance, EdgeHTML only runs on Windows, while WebKit is primarily used on Apple platforms and other non-Windows systems.
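The engine-to-platform relationships above can be captured in a small lookup table. This is a simplified sketch for illustration, not an exhaustive or authoritative support matrix:

```python
# Simplified engine/platform matrix based on the list above.
ENGINE_PLATFORMS = {
    "Blink":    {"Windows", "macOS", "Linux", "Android"},
    "EdgeHTML": {"Windows"},                # Windows-only
    "WebKit":   {"macOS", "iOS", "Linux"},  # primarily non-Windows
    "Gecko":    {"Windows", "macOS", "Linux", "Android"},
    "Trident":  {"Windows"},
}

def engines_on(platform):
    """Engines you could encounter on a given operating system."""
    return sorted(e for e, oses in ENGINE_PLATFORMS.items() if platform in oses)

print(engines_on("Windows"))
```

A table like this helps explain why a Windows-only test lab can never exercise WebKit rendering, and why macOS or iOS devices belong in the test matrix.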

Identify Which Browsers to Test

There are many browsers in use. In an ideal world we would test all of them, but in the real world it's best to narrow the list down to a few, based on the browsers and devices most of your users are on. There are plenty of production analytics tools (such as Google Analytics or Matomo) that give you visibility into which browsers are most popular among your users. You may assume that testing on Chrome and Firefox covers the critical path, but a substantial number of users may be on Safari, Edge, IE, or others that are important for your team to pay attention to.
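In practice, this often means exporting usage shares from your analytics tool and keeping every browser above a traffic threshold. A minimal sketch, with invented usage numbers for illustration:

```python
# Hypothetical browser-usage export from an analytics tool
# (the shares are invented for illustration).
usage = [
    ("Chrome", 0.52), ("Safari", 0.21), ("Firefox", 0.09),
    ("Edge", 0.08), ("IE 11", 0.06), ("Other", 0.04),
]

def browsers_to_test(usage, min_share=0.05):
    """Keep every named browser that carries at least min_share of traffic."""
    return [name for name, share in usage
            if share >= min_share and name != "Other"]

print(browsers_to_test(usage))
```

The threshold is a policy decision: a consumer site with millions of visitors may care about a 2% browser, while an internal tool may not.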

Not only should you test multiple browsers, but you may want to consider testing multiple versions. For instance, mabl always runs tests on the latest version and the version before it by default. While it’s considered best practice to upgrade browsers regularly, many people simply don’t. Some companies even limit access to specific browsers, or have a policy that dictates when software can be updated and prevents employees from upgrading on their own.

It’s best to divide the browsers you wish to test into three grades:

  • A grade: The most common, most capable modern browsers. These are known to be fully capable, and for testing purposes everything should work perfectly in them.
  • B grade: Older and less capable browsers that are known to have compatibility issues. For these browsers, it’s important to test that, at minimum, you are delivering satisfactory access to your content and that the main functionality continues to work.
  • C grade: Rare and/or unknown browsers. It’s impractical to test these, but you can include fallbacks to handle situations where they won’t render your content, much as with B-grade browsers. For example, checks can be added to render a simpler variation of the site, or to redirect users whose browsers don’t meet the minimum requirements.
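A grading policy like this can be made executable so that test plans and runtime fallbacks stay consistent. The sets below are a hypothetical sketch; real cutoffs should come from your own analytics and support policy:

```python
# Hypothetical grading rules; adjust the sets to match your support policy.
A_GRADE = {"chrome", "firefox", "safari", "edge"}
B_GRADE = {"ie"}

def grade(browser):
    """Classify a browser into the A/B/C grades described above."""
    name = browser.lower()
    if name in A_GRADE:
        return "A"   # everything should work perfectly
    if name in B_GRADE:
        return "B"   # core content and functionality must still work
    return "C"       # rare/unknown: serve a fallback experience

print([grade(b) for b in ("Chrome", "IE", "NetFront")])
```

A real policy would also take versions into account (e.g. only the last few versions of an A-grade browser), per the versioning discussion above.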

To translate this into real-world terms, at a bare minimum it’s wise to test at least the last few versions of the following A-grade browsers:

  • Chrome
  • Firefox
  • IE/Edge
  • Safari

Beyond this, you will want to ensure you can at least provide a minimal acceptable experience on B-grade browsers such as IE 8 and 9.

Finally, you will likely want to make sure your site is accessible by meeting the WCAG AA compliance guidelines.

If you are planning to work with cutting-edge tools, it makes sense to be forward-thinking, and you may wish to consider testing on several pre-release (i.e., still in beta) browsers, using the developer or nightly builds that most major vendors publish.

Testing Different Devices

You may also want to test on multiple devices, as browsers render differently depending on the machine on which they are displayed. These should include mobile devices, tablets, laptops, desktops, and potentially some television browsers.

Another variable is viewport size. For instance, a 1366 x 768 resolution, common among laptops, will look different from 1024 x 768, or from the 800 x 600 that was common on old, small monitors. Consider testing a set of breakpoints (e.g., tablet and phone widths) to check how responsive your app is on different devices, and provide separate mobile and desktop presentation modes if rendering differences can’t otherwise be resolved.
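Breakpoint logic itself is easy to express and test independently of any browser. The widths below are common conventions, not values prescribed by this article:

```python
# Example responsive breakpoints (768px and 1024px are common conventions).
BREAKPOINTS = [(0, "phone"), (768, "tablet"), (1024, "desktop")]

def layout_mode(viewport_width):
    """Pick the presentation mode for a given viewport width in pixels."""
    mode = BREAKPOINTS[0][1]
    for min_width, name in BREAKPOINTS:
        if viewport_width >= min_width:
            mode = name
    return mode

for width in (320, 800, 1366):
    print(width, layout_mode(width))
```

Testing at widths just above and just below each breakpoint is a cheap way to catch layouts that break exactly at the transition.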


When to Test

As with all testing techniques, particularly if you are taking a shift-left approach to application testing, you will want to apply cross-browser testing as early as possible. This means during development, when the application is complete, and even after it has been released.


It’s wise to get all functionality working as best as possible on all browsers during the development phase to give your team the best chance of saving time and resources in the long run. mabl makes this easy with a low-code test creation process, built-in testing cloud scalability, and unlimited parallel test runs that accelerate the overall testing process.


When you get into serious testing phases, consider the following approach:

  • Run tests in a few stable browsers on the three main operating systems (Windows, macOS, Linux)
  • Run tests using common phone and tablet browsers
  • Run tests in any other browsers on your list
  • Run some lo-fi accessibility testing to ensure that a diverse spectrum of individuals can access your content effectively. For example, try navigating with only a keyboard, or with a screen reader.


Whichever phase you are in, your cross-browser tests should cover:

  • The end-to-end user experience – test all the interconnected pieces of your website, from server-side functionality to the client-side code that runs in the browser, API calls, and even external services like email and file downloads
  • GUI – make sure the interface renders cleanly and clearly in all browsers tested
  • Response to input – make sure inputs respond the same way in all browsers, and if they can’t, create an alternate method of access to important pieces of functionality
  • Performance – make sure the site loads in a reasonable amount of time on all the devices and browsers that are important to your users

Note that not everything that behaves differently across browsers is necessarily an “error”; some differences are just display issues. For this reason, consider establishing clear benchmarks and goals for each of your test plans.

Manual Cross Browser Testing

Manual testing is exactly what it sounds like. The team identifies which browsers the application should support, then testers manually run through the same test cases in each environment, noting how the application responds in each browser and reporting any bugs that come up along the way. Much of this testing can be done by individual testers or by user groups, possibly in conjunction with usability testing.

The flaws in manual testing are self-evident; it’s not possible to cover each and every browser this way, upon every code change, and it might be difficult to gain access to devices for each test case. For example, mobile phones have such a wide variability that it is difficult to ensure that content will render properly on every device. It’s also worth mentioning that manual testing can be both costly and time consuming, but there are some situations where manual testing is the best approach. Many teams have implemented automated tests and manual testing together to ensure the highest quality experience for their users.
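The scale problem is easy to quantify: the test matrix is the product of browsers, versions, and viewports, and even a modest matrix multiplies beyond what a manual pass per code change can cover. A small sketch with illustrative values:

```python
import itertools

# A modest, illustrative test matrix.
browsers  = ["Chrome", "Firefox", "Safari", "Edge"]
versions  = ["latest", "latest-1"]
viewports = [(1366, 768), (1024, 768), (375, 667)]

# Every combination is one test environment to run per code change.
matrix = list(itertools.product(browsers, versions, viewports))
print(len(matrix))  # 4 browsers x 2 versions x 3 viewports = 24 runs
```

Add operating systems or a few real mobile devices and the count climbs into the hundreds, which is exactly the workload automation is suited to absorb.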

Automating Cross Browser Testing

Taking an automated approach that removes many of the tedious tasks of testing is far better. It typically involves using software such as mabl, which provides the cross-browser test infrastructure to run an unlimited number of tests in parallel (it's fast) and plugs easily into both developer and QA workflows (it's easy to use). mabl also provides screenshots from your app at every step of every test, so it's easy to verify what the app looks like on any viewport and browser you are testing.

While not every aspect of cross-browser testing makes sense to be handled with automation, particularly for some of the more obscure browsers and devices, a great deal of time can be saved by using mabl to run through the most popular browsers.

mabl is a low-code test automation solution that makes it easy to integrate cross-browser testing into your development pipeline; you can treat cross-browser testing as part of your normal regression testing processes, and eventually every other aspect of testing you perform, ranging from system and end-to-end testing to visual testing and more. With mabl’s low-code test automation platform you can easily implement best practices for cross-browser testing.

See for yourself how mabl can help you easily create and run reliable, cross-browser tests at every step of your development lifecycle. Sign up for a free trial today!