Cross-browser testing: Why it's important and how to do it efficiently

A comprehensive look at cross-browser testing, including what it covers, why it's important, and best practices.

Quick question: Can you name all the browsers? I’m sure the major ones come right to mind: Chrome, Firefox, Safari — maybe Edge. Given another second, you might think of Opera, Brave, Arc, and a couple of others. 

If you’re brainstorming to help a friend pick their next browser, you’ve done a good job. Unfortunately, if you’re testing your software to ensure compatibility, you’re way off. 

In the context of cross-browser testing, this level of granularity just isn’t enough. Not only are there many different browser types, but every web browser has many different versions, and each functions differently across different devices. Google Chrome alone has had over 188 versions.

In an ideal world, your website would work across every single variation, and every user — no matter how unique their combination of device, browser, and OS — would be able to access it. But in reality, this just isn’t practical. When we talk about cross-browser testing, there are always two questions in play: How should we test? And what is worth testing at all?

What is cross-browser testing?

Cross-browser testing refers, as the name implies, to the process of testing how well a website functions and appears across different browsers and, with the rise of non-desktop hardware, across different devices.

As in other forms of testing, the methods of performing cross-browser testing tend to lie somewhere along the spectrum between mostly manual and mostly automated. 

Manual cross-browser testing is surprisingly intuitive, even for those who haven’t done it. If you’ve built a website with Chrome in mind, you can open it in Firefox and click around to see how well it works.

Sometimes, this can be a little finicky. In some cases, you might need to create a virtual machine. In others, you’ll have to decide whether you can get away with an emulator or need to load the website on an actual device (and getting your hands on an old Android phone isn’t always easy).

Manual testing can be enough if the website is simple and you’re familiar with the most likely pain points you need to check. However, manual testing is not scalable or efficient past a certain threshold.

At a certain scale, automated testing becomes necessary. If your website is being accessed by people across the world, for example, you might need to test your site across more devices, older devices, various operating systems, different screen sizes, and browsers that are less popular in your home country. If you’re pushing updates regularly, especially in a CI/CD environment, automated testing becomes essential for making sure each new update doesn’t break the website for other users.
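As a rough illustration, a test runner like Playwright (used here as an assumed example; other tools work similarly) lets you declare your suite once and run it against several browser engines in CI:

```typescript
// playwright.config.ts — a minimal multi-browser setup.
// A sketch assuming Playwright; project names like "mobile-safari" are our own labels.
import { defineConfig, devices } from '@playwright/test';

export default defineConfig({
  // Every test in the suite runs once per project, i.e., once per browser profile
  projects: [
    { name: 'chromium', use: { ...devices['Desktop Chrome'] } },
    { name: 'firefox', use: { ...devices['Desktop Firefox'] } },
    { name: 'webkit', use: { ...devices['Desktop Safari'] } },
    { name: 'mobile-safari', use: { ...devices['iPhone 13'] } },
  ],
  // Retry in CI so one transient failure doesn't fail the whole matrix
  retries: process.env.CI ? 2 : 0,
});
```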

That said, despite the importance of cross-browser testing, strict prioritization decisions always need to be made. Testing isn’t zero-effort, so testers need to know when covering an old browser isn’t worthwhile or when allowing a slightly dysfunctional version out on a limited number of devices is a worthwhile tradeoff.

Note that, given the importance of cross-browser compatibility, many teams opt to use JavaScript frameworks and libraries like React, Vue.js, and jQuery, as well as UI frameworks like Bootstrap and Tailwind, because these tools either already support a wide range of browsers or offer native testing capabilities.

What does cross-browser testing cover?

Cross-browser testing generally covers four broad categories: 

  • Base functionality
  • Accessibility
  • Responsiveness
  • Design

Across these four categories, developers and testers are trying to ensure that when most users access the website, they can reliably and consistently get the experience you intended.

Base functionality

This category is the foundation of your website: Does every feature necessary to actually use the website work across the browsers you’re testing? In this category, you’ll test things like:

  • Are all the dialog boxes working as designed?
  • Are all form fields accepting valid inputs?
  • Is the website handling cookies correctly?
  • If it’s optimized for mobile devices (and it should be), does the website work via touch input as well as click input?

Testing these functionalities is foundational. If users encounter errors with form fields, for example, they won’t be generous enough (or knowledgeable enough) to assume you ran out of time to test. They’ll assume your website is broken.
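Checks like these are easy to script so they run on every browser you target. Here’s a minimal sketch of a form-field test in Playwright; the URL, labels, and error message are hypothetical stand-ins for your own site:

```typescript
// form.spec.ts — does the email field accept valid input in every browser?
// With a multi-browser config, this same test runs once per browser profile.
import { test, expect } from '@playwright/test';

test('email field accepts a valid address', async ({ page }) => {
  await page.goto('https://example.com/signup'); // hypothetical URL
  await page.getByLabel('Email').fill('user@example.com');
  await page.getByRole('button', { name: 'Sign up' }).click();

  // A valid input should never trigger the validation error
  await expect(page.getByText('Please enter a valid email')).toHaveCount(0);
});
```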

Accessibility

Accessibility, over the years, has shifted from a secondary feature to one that needs to be prioritized and tested in a website’s earliest releases. And for good reason: According to the CDC, one in four Americans has a disability, and, as the A11y Project writes, “For many, technology built with accessibility in mind makes things easier. For people with disabilities, technology makes things possible.”

By building and testing accessibility features across browsers and devices, testers can ensure their websites work well with assistive tools such as screen readers and magnifiers. With an accessibility mindset, testers can also take a closer look at their websites to see whether the structure and appearance accommodate people with colorblindness, dyslexia, and seizure triggers.

For a baseline, look to comply with the Web Content Accessibility Guidelines (WCAG).
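Automated scanners can catch a useful subset of WCAG issues on each browser you test. As a sketch, here’s how a check might look with the @axe-core/playwright package (assuming it and Playwright are installed); it won’t replace manual testing with real assistive tools, but it’s a cheap baseline:

```typescript
// a11y.spec.ts — a baseline WCAG scan, run per browser profile.
import { test, expect } from '@playwright/test';
import AxeBuilder from '@axe-core/playwright';

test('home page has no WCAG A/AA violations', async ({ page }) => {
  await page.goto('https://example.com'); // hypothetical URL

  const results = await new AxeBuilder({ page })
    .withTags(['wcag2a', 'wcag2aa']) // limit the scan to WCAG A and AA rules
    .analyze();

  expect(results.violations).toEqual([]);
});
```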

Responsiveness

Though you might have intended your design for one browser over another, it still needs to have some baseline responsiveness across devices. No matter how the user shrinks or expands the website across screens or rotates it on different devices, the website still needs to work, and the design still needs to be understandable. 

Even if you’ve built a mobile-first payment website, for example, users who access it on their smartphones nine times out of ten will still expect to be able to open it on their desktops on the rare occasions when they want to audit their spending. No matter how rarely they switch devices, people will expect a comparable user experience (UX) on every device they use to access the website.
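A simple way to automate this is to run the same assertions at several viewport sizes. Here’s a sketch in Playwright; the sizes and selectors are illustrative, not prescriptive:

```typescript
// responsive.spec.ts — the navigation should stay usable at common sizes.
import { test, expect } from '@playwright/test';

const viewports = [
  { width: 375, height: 667 },  // small phone
  { width: 768, height: 1024 }, // tablet
  { width: 1440, height: 900 }, // desktop
];

for (const viewport of viewports) {
  test(`navigation is visible at ${viewport.width}x${viewport.height}`, async ({ page }) => {
    await page.setViewportSize(viewport);
    await page.goto('https://example.com'); // hypothetical URL
    await expect(page.getByRole('navigation')).toBeVisible();
  });
}
```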

Design

The three previous categories all touched on UX and the core functionality of actually using the website. But the design is no less important. Functionality is a precondition, but a design that works well and appears correctly is what will actually convert potential users and build brand recognition.

By the time you’re doing cross-browser testing, you’ll likely already have done a design handoff between the designers and developers. But in cross-browser testing, all that collaboration can turn to frustration if the agreed-upon design mutates in the wrong browser. 

Here, you’ll want to make sure all the fonts, images, and layout choices match the design specifications — both to create branding consistency and to ensure every user gets the intended experience. As a bonus, you’ll also keep the design team happy so that they don’t make memes about you. 

[Meme: a nice-looking burger labeled “The design in my head,” a plain-looking burger labeled “The design in Figma,” and a smashed, messy burger labeled “The design in production”]
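One way to catch this kind of drift automatically is visual regression testing. Playwright, for example, ships screenshot comparisons; this sketch records a per-browser baseline on the first run and fails if later renders diverge (the file name and threshold are arbitrary choices):

```typescript
// design.spec.ts — a visual regression check, run per browser profile.
import { test, expect } from '@playwright/test';

test('landing page matches the approved design', async ({ page }) => {
  await page.goto('https://example.com'); // hypothetical URL

  // The first run stores landing.png as a baseline; later runs diff against it
  await expect(page).toHaveScreenshot('landing.png', {
    maxDiffPixelRatio: 0.01, // tolerate tiny anti-aliasing differences between runs
  });
});
```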

Why is cross-browser testing important?

Cross-browser testing has an obvious value: If you’ve built a website, you want as many people as possible to be able to use it. But a shallow look at that value makes it easy to skimp on cross-browser testing when timelines get tight. 

Without a deeper understanding of the tradeoffs, it’s tempting to see relatively few potential users with this or that browser and discount them. Talk to experienced testers, however, and you’ll see a feedback loop that connects seemingly disparate costs — all of which together make thorough cross-browser testing essential. 

The first reason cross-browser testing is important — broadening your user base — is often underrated despite its obviousness. A first glance at browser market share, for example, might make you think it’s okay to not test against Microsoft Edge. 

[StatCounter graph showing Chrome, Safari, and Edge as the top browsers, with large gaps between each]

Further research shows, however, that Edge has been growing in popularity and is, according to some stats, beating out Safari and Firefox. Testing against Edge now might prepare you if its popularity keeps growing.

Similarly, the incredible dominance of Chrome means you need to get granular about which versions of Chrome you’re testing against. From March 2023 to March 2024, for example, Chrome 114 and 115 were more popular than Safari — even though Chrome 116 was already out. 

[StatCounter graph showing usage of various versions of Chrome]

With this in mind, broadening your user base also means deepening your user base. Many more users than you might think are hiding behind older versions of familiar browsers.

Making your website available to as many users as possible isn’t, however, just a sheer numbers game. The more users that can access your website, the more brand advocates you can potentially generate (and brand detractors you can potentially avoid). If your website doesn’t work across the various browsers and devices your users use, no marketing campaign will work on them. 

Similarly, your website's functionality across browsers affects a web of other things. If your marketing and outreach efforts depend on SEO, for example, a website that doesn’t work on major browsers and browser versions might face search penalties and struggle to rank.

The more you look, the more a feedback loop starts to form: With a well-tested website, you can reach more users and make your growth efforts more effective, allowing for wider adoption. But the feedback loop can spin in the opposite direction, too: If your website doesn’t work for many users, it will fail to reach them, and those who do find it will struggle to use it and be unlikely to return.

3 best practices for cross-browser testing

Cross-browser testing best practices are as much about prioritization as execution. A flawlessly executed series of tests that wasn’t worth the time it took is not a good strategy; neither is a well-targeted suite that runs too slowly to keep up with your releases.

1. Carefully prioritize how much you can test

There’s only so much time in the world (and a lot of browser versions), so you need to figure out what resources you have and how much you should spend on cross-browser testing early in the testing process.

There are a few heuristics you can use to inform this decision:

  • If your website has been live for a while, look at Google Analytics to see which browsers your users already tend to use. Rule of thumb: If a browser accounts for over 5% of your traffic, it’s worth testing. 
  • Work with your engineering team, product owners, and marketing teams to measure business impact. Sometimes, a relatively small number of users can pose a disproportionate drag on marketing and expansion efforts if they’re not properly included. 
  • You’ll typically want to use a combination of emulators and real devices, with the former being easier to test on and the latter more thorough. Test on real browsers and devices for popular combinations; use emulators for less popular ones. 
  • Build an ongoing testing plan with different targets. For popular browsers and devices, test compatibility with every release; for less popular ones, test only around major changes (see the config sketch below). 

With these heuristics, you can dramatically reduce your potential workload while ensuring you’re still covering most of your users. 
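One way to encode such a tiered plan, extending the multi-browser config sketched earlier (the tier assignments and the FULL_MATRIX flag are hypothetical choices, not a standard):

```typescript
// playwright.config.ts — a tiered testing plan.
// Tier 1 runs on every release; tier 2 runs only when FULL_MATRIX is set,
// e.g., before a major change.
import { defineConfig, devices } from '@playwright/test';

const tier1 = [
  { name: 'chromium', use: { ...devices['Desktop Chrome'] } },
  { name: 'webkit', use: { ...devices['Desktop Safari'] } },
];

const tier2 = [
  { name: 'firefox', use: { ...devices['Desktop Firefox'] } },
  { name: 'mobile-chrome', use: { ...devices['Pixel 5'] } },
];

export default defineConfig({
  projects: process.env.FULL_MATRIX ? [...tier1, ...tier2] : tier1,
});
```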

2. Code defensively

Defensive coding is a programming philosophy that asks developers and testers to assume the worst possible inputs. This posture can help avoid many of the worst browser compatibility issues. 

For example, if you’re coding from a defensive position, progressive enhancement will be a clear design choice. With progressive enhancement, you build a website such that the core content is loaded first, ensuring that if other layers break, users still get the basics.
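As a small illustration of that idea (the element ID and endpoint here are hypothetical), the page below ships a working plain-HTML form first, and the script only layers live search on top when the APIs it needs exist:

```typescript
// enhance-search.ts — a progressive enhancement sketch.
// The page works as a basic <form>; this script adds live suggestions
// only if the browser supports what it needs. If anything fails, the form still works.
const searchInput = document.querySelector<HTMLInputElement>('#search'); // hypothetical ID

if (searchInput && 'fetch' in window) {
  searchInput.addEventListener('input', async () => {
    try {
      const res = await fetch(`/search?q=${encodeURIComponent(searchInput.value)}`); // hypothetical endpoint
      if (!res.ok) return; // fall back silently to the plain form submit
      const suggestions: string[] = await res.json();
      console.log(suggestions); // in a real page, render these under the input
    } catch {
      // Network or browser issue: the basic form remains fully usable
    }
  });
}
```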

You might also track and prioritize frequent offenders in your early design and testing. For example, Internet Explorer has long posed compatibility issues, but even though support for Internet Explorer ended in 2022, 1 in 200 web users still used it as of March 2024. 

Along similar lines, don’t assume that even major browsers will necessarily stay in sync with each other. For example, Chrome started supporting the gap property in Flexbox layouts in version 84 on July 16, 2020, but Safari didn’t support it for another year.
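When you can’t rely on a feature everywhere, feature detection is safer than browser sniffing. Here’s a sketch (the fallback class is hypothetical); note the caveat in the comments, which is exactly why testing on real browsers still matters:

```typescript
// gap-fallback.ts — feature-detect "gap" support before relying on it.
// Caveat: CSS.supports checks the property in general, not flex layouts
// specifically, so a browser with grid gap but no flex gap can still
// report true. Detection narrows the risk; cross-browser testing confirms it.
const supportsGap = typeof CSS !== 'undefined' && CSS.supports('gap', '1rem');

if (!supportsGap) {
  // Hypothetical stylesheet class that spaces flex children with margins instead
  document.documentElement.classList.add('no-gap-fallback');
}
```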

3. Combine automated and manual testing methods

Manual cross-browser testing is often sufficient for simple websites from new companies, but past a certain scale, automated testing methods are more practical. That said, it’s rarely best to be black-and-white: The best testing strategies combine both methods.

Automated testing is at its best when you automate repetitive tasks, such as regression tests. It also becomes more useful the more complex your website is: extensive testing is laborious for manual testers, and the error-prone nature of manual work inevitably results in mistakes. 

Qase, for example, offers a feature called Shared steps, which allows testers to create a single step and reuse it across multiple test cases. Defining a step once then pays off across every test case that reuses it, saving time and effort. 

Manual testing still has its advantages. It’s often more flexible because testers can focus on very specific scenarios and do exploratory testing to identify new issues. Manual testing, especially when involving real devices, also reflects real-world environments in a way automated testing rarely can. 

A combination is often best because testers can use automated testing to test the majority of cases, which tend to be repetitive or complex, and use manual testing to perform more exploratory testing around specific edge cases or quirks unique to your website. 

A detail that deserves attention 

For many companies, cross-browser testing can feel like a small detail, a process you can skip as long as you check a few common things. Nowadays, browsers are largely compatible, and cross-browser compatibility doesn’t make or break a website the way it used to. 

Back in the late 1990s, for example, the two then-dominant browsers — Internet Explorer 4.0 and Netscape 4.0 — posed serious compatibility issues. In one article, a writer runs through a list of issues, including different implementations of dynamic HTML and push technology. 

Eventually, the author writes, “The consumer electronics arena settled the Beta vs. VHS debate long ago, but those of us on the Internet will have to suffer through yet another battle of incompatible technologies, leading to Web pages that work in only one browser or the other.” 

So, yes, compatibility issues are much better than they used to be. But if your website is operating (or hoping to operate) at any substantial scale, thorough cross-browser testing is still important. 

The more users with better access you can get on the website, the stronger every other function of the company will be. The same principle applies in reverse: If your website breaks for some users, no product functionality or marketing message will work for them. 
