Why good UX should be a key factor when you’re shopping for a new tool

Most development and testing teams know by now that helping people maintain focus is essential to high productivity. Many teams, however, assume they’ve done as much as they can on this front by minimizing meetings and Slack messages.

Research shows, however, that task-switching — starting one task and switching to another before finishing the first — is worse than any external interruption. The effect is even harsher for certain kinds of work: the same research shows that programming and testing tasks are the most vulnerable to task switching and interruption.

The researchers recommend minimizing task switching and reducing cognitive load whenever you’re programming or testing. But here’s the issue: While many engineering organizations consider “developer experience” a priority, the testing experience rarely gets the same attention.

As a result, organizations can overlook the UX of the testing tools they adopt, meaning testing and QA teams can feel constantly interrupted by clumsy, buggy testing tools that all but force them to switch tasks regularly. 

With this risk in mind, testing teams need to reevaluate their testing tools with UX as a top priority. An unintuitive testing tool isn’t just an annoyance; it’s a real threat to your ability to get work done. 

Vendor management is an underrated skill

New test automation frameworks, new testing methodologies, the continuous effort to shift testing left — there’s a lot to keep up with in testing. But most of these topics tend to focus on so-called hard skills. What often goes under-discussed is vendor management — an important but underrated skill that sits right in the middle of hard and soft skills. 

Effective vendor management builds trust with stakeholders

“Build or buy” is one of those problems that will chase QA teams forever. 

Building sometimes means you get a solution built perfectly for your use case and no other (and sometimes it means years of building something you could have bought off the shelf much sooner). 

Buying sometimes means getting access to the best tool built by the best experts (and sometimes, it means getting locked into an expensive contract with a vendor that can’t deliver).

QA and development teams are juggling all these possibilities every time they decide whether to build, buy, or do something in between. Will Larson, CTO at Carta, compares using a vendor to taking on an outstanding debt. “You know you will have to service that debt’s interest over time,” he writes, “and there’s a small chance that they might call the debt due at any point.”

Other risks include the vendor going out of business, suddenly charging much more than they used to, or suffering a security breach. 

All these risks go beyond the simple pros-and-cons list people tend to make when comparing vendors’ products. There’s a good chance that the best tool, from a purely technical perspective, isn’t the best choice if any of these risks is likely to materialize.

The better you can get at sussing out these risks and making careful decisions that pay off (with better tools and/or lower risks), the more you can build trust with stakeholders who would otherwise be wary of your build vs. buy decisions. 

Good tools create positive feedback loops

Bad vendor management can lead to a negative feedback loop as bad tools drag down your productivity and bad vendors make it difficult to escape their contracts. Along the way, you can lose the trust of the leaders above you and the QA engineers below you.

The good news is that the feedback loop works in the other direction, too: Good tools enhance productivity, good vendors build trust, and the ability to consistently make good tooling decisions enhances overall engineering effectiveness.

But there’s an interesting wrinkle: Teams tend to assume there’s a ceiling to this work — that you can tweak your tooling and processes, but the gains only go so far. Research says otherwise.

In 2023, Google ran a study on the effects of improving build latency, and the results surprised the researchers.

“The expectation was that there would be a clear pattern where, if the build takes less than x seconds, it would mean developers are more likely to stay on task and more likely to return quickly to their task,” the researchers write. “Reality has a nasty habit of dashing our expectations, though.”

The researchers expected a local limit: improvements to build time would help, but the benefits would eventually hit a point of diminishing returns, as visualized in the diagram below.

In reality, the researchers write, “Every improvement to build latency will help developers stay on task and get back on task faster.” 

This result won’t generalize to every tool and process, but it’s likely directionally accurate: Testing and development teams shouldn’t accept “good enough” and assume they’ve hit a local limit. Instead, they should work to continuously improve their tools and processes — there might be no limit to the potential productivity improvements. 

QA teams need to look beyond the pros and cons

The most common way to compare vendors and tools — which you see reflected on their own comparison pages — is to weigh the technical pros and cons against each other and see which tool comes out ahead.

The problem with this framework is that you can easily lose the forest for the trees, treating each difference as a discrete factor rather than as part of a composite whole. And when you don’t look at the whole, you can miss the quality of the user experience — a factor that isn’t just one pro or con among many but one that determines whether the rest of the tool comes together.

Good UX reduces cognitive load

UX isn’t about looking pretty — it’s about reducing the effort a tool demands so users can work more easily and to much greater effect.

Technical teams filled with testers and developers are, of course, better positioned than anyone else to make a hard-to-use tool work, given their expertise. But if you’re buying a tool, why should you have to bend it to your will just to make it work?

It’s possible, but that doesn’t mean it’s a worthwhile way to spend your time. Part of the problem is that there are often multiple issues hiding behind the feeling that a tool isn’t intuitive.

In a 2019 study, for example, researchers tried to identify the biggest drivers of cognitive load for software developers. They grouped the drivers into three categories: information, work processes, and tools. Within the tools category, they found numerous subcategories, including:

  • Lack of functionality
  • Lack of stability
  • Doesn’t integrate
  • Delays
  • Unintuitive or hard to use 

Testing and development teams would likely ditch a tool that had all of these issues, but they might not realize that most or all of their tools suffer from at least one of them.

Once you weave all your tools together, small issues can accumulate, and you can end up with a software development lifecycle (SDLC) that drives cognitive load, distraction, and stress instead of one that relieves you of strain and helps you work. Bad UX is often an emergent property of your entire toolchain. 

Good UX is easy to underrate

Despite the risks of bad UX described above, good UX is surprisingly easy to underrate. There are a few reasons for this, but many of them emerge from what psychologists call salience bias, a common cognitive bias that affects decision-making.

When we make decisions, we tend to think we’re weighing all the factors and context, but more often than not, we’re examining a narrow selection of the full data without realizing it. 

When consumers buy a car, for example, the most salient features will be the most obvious ones: appearance, price, and basic functionality. They might not think as carefully about fuel economy, however, because that factor is a little more abstract and a little less salient. But after six months of expensive refueling, they might be fuming that they didn’t think about it.

In the same way, QA teams can end up focused on the technical pros and cons of a testing tool instead of how the holistic experience comes together for daily use. But, months later, when your team isn’t using a feature that appeared on the pros and cons list because it’s hidden beneath a dozen menus, you might be frustrated you didn’t think more about UX. 

The hidden costs of bad UX

To make matters worse, the costs of bad UX tend to be hidden and to accumulate almost invisibly over time. Testing is hard, after all, and it’s easy to absorb the effort of using a bad tool as “part of the job.” But a great team can step back, see where the friction is coming from, and refuse to accept the costs of bad UX.

Steep learning curves can intimidate new QA engineers

Many testing tools are complex, but some do little to make that complexity approachable. The cost of this difficulty is especially likely to go unnoticed when experienced QA engineers choose the tool but new and junior QA engineers are its primary users.

As Nicco Hagedorn, Head of Software QA at Elgato, says, “Often, you start a new job, and you’re so overwhelmed by all the tools that you feel like you need a Doctorate just to navigate them.” 

For new QA engineers, little points of friction can accumulate because they’re learning all the tools at once. Onboarding can then become a slog, and experienced team members can spend a lot of time getting new team members up to speed. 

Bad design limits feature potential

When you’re shopping for a testing tool, it’s easy to be impressed by the features and even easier to overlook how hard it might be to access and use them. Bad UX can effectively hide features, limiting the potential of the tool you’re buying.

As Anika Danner, Director of Quality Assurance at Bit.ly, says, reflecting on their previous test management system, “The team was constantly frustrated with our previous TMS, poor user experience and outdated UI, so they did not invest in using it…Navigating and managing test cases was cumbersome… Our TMS functioned more as a storage repository for test scenarios rather than an actively used tool.”

This is the end result for a shockingly large number of tools: You buy them for one ambitious, valuable purpose, but they end up becoming mere shadows of what they were supposed to be.

Bad UX inhibits flow state

Flow state is a state of focus where creativity is at its highest level and distraction is at its lowest. For many developers and QA engineers, getting to flow state is the primary goal of any given workday, and for many engineering teams, making flow state easier to reach is the primary goal of the entire SDLC toolchain.

Unfortunately, there are many barriers that inhibit focus. 

In a 2024 study, researchers identified 21 barriers to flow and grouped them into personal, interpersonal, and situational categories.

Developers, QA engineers, and testing teams sometimes assume interpersonal barriers, such as cross-team collaboration, block their focus the most, but this study found that situational barriers were the worst. “Specifically,” the researchers write, “interruptions and distractions, time-related challenges, and negative UX were considered flow-preventing.”

The first two are fairly obvious: Interruptions and distractions are clear blockers, as is having too little time. But negative UX is a little less obvious.

The researchers write, “Having negative UX with the tools was reported to shift attention from the task at hand to solving other secondary problems.” When developers and QA engineers encounter tools with a bad UX, they either shift to entirely different secondary tasks or treat the tool itself as a secondary task they have to puzzle through. Either way, the flow state is broken. 

The researchers also write, “In light of these findings, it can be inferred that UX can not only increase the intrinsic motivation to use IT and enable more flow, fun, and immersion but also work as a barrier to experiencing flow.” In other words, as we covered, good tools can boost the positive feedback loop that makes development and testing more fun and more productive. 

A good craftsperson (sometimes) blames their tools

There’s an old adage that says, “A good craftsman never blames his tools.” The point of the adage is that a good craftsperson doesn’t deflect blame but instead works to improve and stay accountable. 

Taken beyond that point, however, the adage doesn’t always hold. If a carpenter swings a hammer and the handle breaks, they can and should blame their tools. The carpenter will finish the work faster by blaming the tool, getting another one, and trying again.

Javier Casas, a software engineer, deconstructs this adage in several ways, eventually arriving at a new version that states, “A good craftsman is good because he searches for the best tools.”

The best teams search for the best tools and recognize when functionality is lacking or the UX is limiting. If a tool comes up short, they know when to blame the tool and when to blame how they’re using it. More importantly, they take their tools seriously and know when to try a new one.