How SUSE Matured QA with Qase

Inside Brandon DePesa’s journey from startup spreadsheets to unified, data-driven quality at SUSE 

SUSE is one of the most recognizable names in computing, from open source to enterprise. In its cloud-native business, Rancher Prime is the centerpiece: a platform for Kubernetes cluster provisioning and management.

For many customers, their first experience with SUSE begins not with a purchase order, but with open-source experimentation. 

“We’ve found that a lot of our customers start as open-source users,” explains Brandon, Director of Quality Engineering for SUSE’s cloud-native group. “They might be playing with Rancher at home or using it for a POC. We need something that works and works well, so they want to keep using it and recommending it.”

For Mike Latimer, Senior Director of Engineering overseeing QA, security, project management, and infrastructure, it boils down to trust:

“We as a business rely on trust between us and our customers. At the core of that trust is a reliable, functional product.”

That trust has to hold across open-source users and paying enterprise customers with complex hybrid environments. Quality is the foundation; QA isn’t “just another function”; it’s existential.

From three QA engineers to eighty: when startup systems needed to scale

SUSE’s cloud-native business went from scrappy, scattered QA artifacts to a single, data-driven quality hub that leadership and stakeholders across the company can see and trust.

When Rancher was a startup ten years ago, the QA team was tiny.

“When I started, QA was three people including myself,” Brandon recalls.

At that scale, using lightweight tools was fine. Test cases were tracked in spreadsheets, plans and notes were written in Confluence and internal wikis, and some teams were using GitHub issues or source control as makeshift test repositories.

“So even five years ago,” says Brandon, “with ten to fifteen people split across a few teams, it wasn’t ideal, but it wasn’t a huge problem.”

Then Rancher matured into an enterprise product, was acquired by SUSE, and everything changed. Today, Brandon’s QA organization is about eighty people, plus “tons of developers, project managers, and everyone else who is interested in the QA world.” That scale exposed cracks in the old way of working.

Different teams had different tools, different naming conventions, and different places where test plans and cases lived. If you weren’t on a given team, you might have no idea where their critical test artifacts were stored.

“It became very difficult to even understand where test artifacts were located,” Brandon says. “Test plans, test cases, automated tests, documentation—you could only reliably find them if you were on that team.”

For a business unit selling a suite of cloud-native products that have to work together, that was risky. You can’t easily validate cross-product scenarios if you can’t even see what’s being tested where.

Reporting that couldn’t keep up with leadership questions

The fragmentation problem got even worse when leadership wanted to see the big picture.

Questions like “What’s our overall test coverage?”, “How many test cases do we have across Rancher Prime?”, and “What percentage of those are automated?” were nearly impossible to answer at an aggregate level.

“We had no requirements traceability, no requirement coverage,” Brandon says. “At best, we could manually count test cases for each area. Things like automation velocity or manual vs. automated ratios across products were just impossible to provide.”

To produce an answer, teams had to collect test assets from multiple spreadsheets, wikis, and repos, normalize them into a single format, and manually crunch numbers in yet another spreadsheet.

The process was so painful that leadership stopped asking for deeper QA metrics. Everyone knew the data either didn’t exist or wasn’t worth the effort to assemble.

The search: involving the people who’d use the tool

Brandon didn’t want to pick a platform from a slide deck. He wanted real engineers using real tools.

“You need a level of trust with your team,” he says. “If you just say, ‘Here’s the new tool, use it,’ even the best tool in the world will get resistance.”

So he assembled a group of engineers from multiple QA teams and ran a structured evaluation. They shortlisted four or five tools: a mix of open-source and commercial products, including some of the biggest names in the space.

Each team got trial licenses. Engineers imported real test cases, ran test cycles, created defects, and tried to hook up automation. They documented what worked, what didn’t, and what felt painful day-to-day. That feedback loop drove the final recommendation.

“I’ve been in software and quality for almost twenty-five years,” Brandon says. “I still learned new things from the team through this process, especially what was painful for them and what they actually wanted from a tool.”

Why Qase: the combination that finally checked all the boxes

In the end, Qase stood out not for one feature, but for a combination that matched SUSE’s reality.

UI that felt natural to engineers

“The UI was incredibly easy to use,” Brandon recalls. “Even without formal training or demos, very few people had issues. It’s just intuitive and it works.”

For an eighty-person QA org, this mattered. Every hour spent teaching people how to click around a tool is an hour not spent improving tests.

Flexible imports that made migration simple

One team had 2,500 existing test cases. They assumed migrating that volume would take at least several days. With Qase, they imported everything in a matter of hours.

“A lot of tools are very prescriptive,” Brandon says. “You have to fit their exact format or enter everything manually. Qase’s flexibility around how you import made it super simple to get going.”

That flexibility meant teams could move all their manual test assets into Qase first without derailing sprints.

Automation-friendly from day one

SUSE’s QA teams run a wide range of automated tests, from UI automation to backend and system tests, with different frameworks and reporting formats across teams.

Some teams plugged into Qase’s built-in integrations (like JUnit). Others had custom test result outputs, which they ran through a lightweight script to convert them into a format Qase could ingest, as in the sketch below.
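As a rough illustration of what such a script might look like, here is a minimal sketch that converts a hypothetical custom JSON result format into JUnit XML, which a built-in integration like the one mentioned above can then pick up. The input schema (name, status, duration, message), file names, and suite name are assumptions for the example, not SUSE’s actual format:

```python
# Hypothetical converter: custom JSON test results -> JUnit XML.
# The input schema below is an assumed example, not SUSE's real format.
import json
import sys
import xml.etree.ElementTree as ET


def convert(results_path: str, junit_path: str) -> None:
    # Expected input: a JSON list like
    # [{"name": "...", "status": "passed", "duration": 1.2, "message": "..."}]
    with open(results_path) as f:
        results = json.load(f)

    failures = sum(1 for r in results if r["status"] != "passed")
    suite = ET.Element("testsuite", name="custom-suite",
                       tests=str(len(results)), failures=str(failures))

    for r in results:
        case = ET.SubElement(suite, "testcase",
                             name=r["name"], time=str(r.get("duration", 0)))
        if r["status"] != "passed":
            # Attach a <failure> element so the result is reported as failed.
            ET.SubElement(case, "failure", message=r.get("message", "failed"))

    ET.ElementTree(suite).write(junit_path, encoding="utf-8",
                                xml_declaration=True)


if __name__ == "__main__":
    # Usage: python convert_results.py results.json junit.xml
    convert(sys.argv[1], sys.argv[2])
```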

Qase also helped close the loop between manual and automated worlds. If a result came in for a test that didn’t exist yet, Qase auto-created the test case. Teams could later link these auto-created cases back to existing manual tests if needed.

Once a few teams had working scripts and patterns, they shared them internally, and rollout to the rest of the org became very fast.

Read-only licenses that enabled real visibility

Cost was a factor for a large enterprise group. The QA team uses around 75 full Qase licenses. But there are as many as 200 other stakeholders—product managers, architects, security engineers, support, and leadership—who need to view test data without authoring or executing tests.

“If we had to pay [full price] for another 200 licenses just so people could view test data, that would have been a no-go from the beginning,” Brandon says.

Qase’s read-only licenses alleviated that burden. Now, anyone in the organization who needs to see QA data can get visibility for little additional budget.

“That visibility for people not working directly with tests was a huge point in Qase’s favor,” Brandon notes.

The impact: from invisible QA to a visible, strategic advantage

Before Qase, leadership’s view of QA was limited and sporadic. Now, they have aggregate dashboards showing QA activity and coverage across all cloud-native products, per-product dashboards for more detailed views, and real-time updates as teams add tests and execute runs.

Early on, there was a strong focus on the number of test cases added over time, and how many of those were automated. Those metrics were simple but powerful, showing continuous improvement and a growing investment in automation.

Over time, Brandon came to see an even more important metric: test execution volume.

“We got to a point where the numbers for test cases and automation added were fairly consistent,” he says. “What that didn’t show was the overall level of testing actually happening.”

Now, SUSE’s dashboards clearly show how many tests are run, against how many versions, and how often those runs happen.

A simple example: suppose a team has ten test cases, five of them automated. On paper, that’s unremarkable. But when Qase reveals those tests are being run across three to five different versions, multiple times a week, the numbers compound quickly: ten tests against four versions, three times a week, is already 120 executions a week, and hundreds within a month. That tells a much richer story about the team’s real-world effort.

“QA is difficult to measure,” Mike says. “If the product works, it just works. But if you can show you’re improving test coverage and doing a better job today than yesterday, that shows your focus is on quality and you’re not getting left behind.”

QA onboarding time cut from weeks to hours

Before Qase, a QA engineer moving to a new team or helping another product often needed a week or two just to understand where test cases lived and figure out what had been covered and what hadn’t.

Now, Brandon estimates that the same understanding can usually be achieved in a few hours.

With Qase, an engineer can browse or search test plans and suites in one place, filter by component, feature, or configuration, see which versions and environments are in scope, and understand the team’s existing coverage and known gaps.

This reduction in onboarding friction lets the QA org be far more fluid in how it deploys people across teams and initiatives.

Cross-product coverage: testing the suite, not just the pieces

SUSE doesn’t just sell a single product; it sells a suite that must work together: Rancher management, virtualization, observability, security, and more.

Qase has become central to understanding end-to-end coverage across these components.

“We’re able to look at test coverage for, say, Rancher management and our virtualization efforts,” Brandon explains. “We can see that one team has tested X, Y, and Z, but we’re missing A, B, and C on the virtualization side, or that we’ve never tested feature A with feature B enabled at the same time.”

That ability to quickly spot cross-product gaps is crucial for enterprise customers who run SUSE in heterogeneous, high-stakes environments.

QA made visible and interesting to the wider organization

Traditionally, testing is a black box. If things break, customers notice. If they don’t, QA has quietly done its job. Qase changed that dynamic inside SUSE.

“Testing is usually very opaque to anyone not in QA or development,” Brandon says. “The dashboards and metrics open a window into what teams are actually doing.”

Now architects browse Qase to understand test history and outcomes. Product managers dig into runs to reassure customers about specific OS or platform coverage. Project managers use Qase data to plan and track release readiness.

Brandon shares a story about an architect who saw a dashboard full of failing runs and got worried that releases were going out in that state. A quick walkthrough showed that he was looking at cumulative testing across all builds, including early, intentionally unstable ones. Filtering to only final release-candidate runs showed what actually went out to customers: fully passing test suites.

“You could see his confidence increase as we walked through it,” Brandon says. “He started asking deeper questions about how we test and why. Those are great conversations to have.”

And this broader engagement is only possible because read-only access is affordable. There’s far less financial friction to “just log in and poke around.”

Productivity: more time spent testing, less fighting tools

SUSE doesn’t try to measure productivity through micromanagement. No one is counting how many minutes a tester spends writing steps.

But the qualitative signals are strong. Engineers say it’s easier to write test cases and plans in Qase. Creating and assigning runs is simpler and faster than managing spreadsheets. Reporting both manual and automated results is far less tedious.

And quantitative indicators back that up:

“Our test cases and automation have been continuously increasing,” Brandon says. “That can’t happen if we’re getting bogged down by process and manual tools.”

By cutting away the friction of scattered documents and manual compilation, Qase lets Brandon’s org spend more time on what actually matters: finding issues early and ensuring releases are solid.

Conclusion: QA that meets the premium standard of SUSE’s cloud-native offering

SUSE’s cloud-native business operates in one of the most complex, demanding corners of enterprise infrastructure. Kubernetes, hybrid and edge deployments, an ever-growing matrix of providers and platforms—none of that leaves room for imprecise QA.

Brandon’s journey—from being one of three QA engineers at a startup, to leading an eighty-person quality org using Qase as its backbone—is a blueprint for QA leaders facing the same transition.

From scattered artifacts to a single source of truth; from ad hoc reporting to real-time executive dashboards; from opaque QA work to a visible, respected discipline.

Qase didn’t create SUSE’s quality culture, but it gave Brandon and his teams the platform they needed to express that culture at enterprise scale, and to show executives, customers, and colleagues exactly what quality looks like in a cloud-native world.
