For the fourth QA myth-busting article, we’re bringing back the original duo: Qase Developer Advocate Vitaly Sharovatov and Qase Director of Content Rease Rios.
Today, we’re addressing the myth that quality is solely the testers’ responsibility.
First, let’s establish that quality is a match between what’s desired and what’s produced.
Multiple factors affect a product's quality, most of which are outside the testers' control. Hence, testers cannot be held solely responsible for quality.
Think of it another way: Rease is the Director of Content at Qase. Does that mean the quality of the content Qase produces is solely her responsibility? Like many testers, Rease may be the final check before content hits production, but she depends on her teammates to be accountable as well. If Rease asks Vitaly to write an article about pair programming and he turns in a draft about encountering a strange bug, we have a mismatch between what was desired and what was produced. Can Rease be held responsible for that?
Whether we’re building an app or a content calendar, it’s important to consider all the factors that influence the quality of the end result.
Quality depends on defining the right customer pain point to address
In IT, delivering a high-quality product means producing software that satisfies users’ desires and helps them solve problems. Before production even starts, we must identify and define the pain points in order to address a specific problem. If we aren’t addressing the right problem, the software will not have value, and the quality will be zero.
Imagine a company that builds a Test Management System (TMS). Executives, product managers, or marketers decide that testers need a gamification feature within the TMS — with points, badges, and leaderboards for completed test cases. They believe this will motivate testers and increase productivity.
However, most testers find this feature unnecessary and even distracting, as they prefer tools that streamline their workflow rather than add superficial incentives. Spending resources on planning and developing this functionality is misguided. Yet, in many cases, executives and product managers drive feature creep by introducing solutions to problems that users don’t have or that aren’t important to them. The quality of features that do not help users is close to zero.
Let’s imagine Vitaly has an idea for an article. He spends two weeks researching the topic and writing the draft — never communicating with Rease about his plans. When he proudly presents his draft, Rease has to inform him that the topic isn’t a good fit for the current campaigns and that it speaks to the wrong audience. If Rease is expected to simply edit the article for grammar mistakes (find bugs) and publish it — knowing that it will not appeal to the target audience — is she the only one to blame if the article doesn’t perform or resonate with the right people?
The gamification feature may be well-designed, and the article may be well-written, but neither addresses an actual problem.
Testers are rarely invited to participate in deciding which problem to solve, so if the wrong problem is chosen, the quality will be zero regardless of the testers’ efforts.
Quality heavily depends on finding an optimal way to solve the defined problem
Continuing with the company building a TMS example, let’s say the team identifies a real problem with flaky automated tests. A product manager then comes up with what they believe is an optimal solution: a feature in the TMS that automatically turns flaky tests off.
While the “solution” targets a real problem, it causes more problems rather than solving the original one. Turning off the flaky tests doesn’t change the fact that they are flaky and must be investigated. If they aren’t investigated, the quality of the product will likely go down. Is this decrease in quality directly tied to testers and only testers? Certainly not.
Now imagine Rease discovers the URL to a popular landing page has changed. Multiple website pages link to the old URL, so site visitors land on a 404 page. A web developer decides that the solution is to remove all the links to this old URL. While this technically addresses the problem of 404 pages, it’s not a good solution because it leaves site visitors stranded without a link to the proper page. A better solution would be to use a 301 redirect.
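For readers who want the concrete mechanics, here is a minimal sketch of what that 301 redirect could look like, assuming (purely for illustration) that the site runs on an Express server; the URLs are made up:

```typescript
import express from "express";

const app = express();

// Permanently redirect the old landing page URL to its new location.
// A 301 tells browsers and search engines the move is permanent, so
// existing links keep working instead of dead-ending on a 404 page.
app.get("/old-landing-page", (_req, res) => {
  res.redirect(301, "/new-landing-page");
});

app.listen(3000);
```

The same idea applies whatever the stack: one small server or CDN rule preserves every existing link, whereas deleting the links silently breaks the paths visitors already use.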
In both situations, the people who are being held accountable for quality were not consulted about the solution. Testers are rarely invited to discussions about which solutions will be developed, so if the chosen solution is far from optimal, quality will suffer regardless of the testers’ efforts.
Quality is influenced by work processes
In many Scrum implementations, testing is done late in the sprint after development is complete. The time pressure on testers is so intense that they rarely have time to do their work well. This often leads to people working overtime and burning out or simply skipping some parts of testing.
Work processes are usually designed by managers. Developers may have some input, but testers are very rarely invited to participate. If testers are expected to rush their work to accommodate poorly designed processes, how could they be held accountable for the quality of the product?
Let’s say Vitaly knows the submission deadline for an important QA conference is approaching. Hours before the deadline, Vitaly asks Rease to do a comprehensive edit. At this point, Rease has no choice but to rush her work. With the time crunch, Rease knows she can’t ask Vitaly to do major revisions, so she can only focus on minor adjustments. If Vitaly’s submission is subpar, is Rease the only one to blame?
When work processes are poorly designed, everyone is set up to fail, and quality decreases as a result.
Quality heavily depends on the product architecture and infrastructure
The quality of a software product also depends on the underlying architecture and the stability of the infrastructure. If these are flawed, testers face significant challenges that can lead to lower product quality.
Imagine a testing environment that isn’t regularly updated with the latest code changes, data, or configurations. Testers are forced to work with outdated information, making it difficult to identify bugs that only appear under current conditions. As a result, issues that could have been caught early slip through and reach the end-users.
Alternatively, imagine an architecture so tightly interconnected that it’s impossible to test individual components in isolation: every module depends on several others, so testing one feature requires the entire system to be up and running. Or imagine that writing automated tests is cumbersome because each test must set up a complex environment, including database connections and network configurations.
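To make that concrete, here is a rough sketch of what a single automated test can look like in such a codebase. The module names and helpers (createDbConnection, startMessageBroker, OrderService) are hypothetical stand-ins, not real libraries:

```typescript
// Hypothetical example: these imports stand in for a tightly coupled codebase.
import { createDbConnection } from "./db";
import { startMessageBroker } from "./broker";
import { OrderService } from "./orders";

test("applies a 10% discount to an order total", async () => {
  // None of this infrastructure matters to the discount rule itself,
  // but because OrderService cannot be constructed without it, every
  // test pays the cost of standing up the whole environment.
  const db = await createDbConnection({ host: "localhost", port: 5432 });
  const broker = await startMessageBroker();
  const service = new OrderService(db, broker);

  expect(await service.applyDiscount(100, 0.1)).toBe(90);

  await broker.stop();
  await db.close();
});
```

In a more testable design, the discount rule could be exercised without a database or message broker at all, and that is exactly the kind of architectural decision testers rarely get to influence.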
In these scenarios, writing and executing tests takes much longer, reducing the number of tests that can be performed under deadline pressure. Testing inefficiencies and constraints mean that defects are more likely to go unnoticed until after release. These issues come from architectural and infrastructural problems well beyond the testers’ control.
Consider an environment that does not support cloud applications like Google Docs. Vitaly writes his article in a Word document saved to his computer and then emails the draft to Rease. Rease must download the file, edit the local version on her computer, save the edited file, and then email it back to Vitaly. They repeat this process for every round of revisions. If they want to include something from a previous version, they’ll have to dig through several files to find what they need. Working in a single document stored on a shared drive that includes version history would make more sense, but in this example scenario, the choice is outside of her control.
Architecture and infrastructure decisions are mostly made by managers, architects, senior developers, and system admins. Testers are rarely invited to participate.
Quality heavily depends on developers’ programming skills
If developers are inexperienced and produce buggy code, testers must invest extra time in testing. Overloading testers, particularly when they are pressed for time, can result in lower-quality products.
Inexperienced software developers will struggle to produce stable, scalable, and testable code, making testers’ jobs that much harder. Testers rarely have a say in developer hiring decisions or technical training programs, so they have little opportunity to influence this aspect of quality.
Now imagine that Rease is asked to produce an ebook on a complex software development topic but not given the chance to select the writer. The person selected for the job has no writing experience at all and has worked as a developer for less than one year.
In both cases, the level of quality is hindered from the start because the person creating the code or content lacks the skills to do it well.
Quality depends on everyone’s motivation to support it
Everyone involved in building the product needs to be motivated to create something of quality. Unfortunately, product managers, architects, developers, designers, and testers are often motivated by factors other than quality results. It’s not that they don’t care about quality; it’s that they aren’t encouraged to prioritize it.
For example, if developers have a KPI to release ten features per month, and product managers have KPIs focused on increasing revenue, they’ll be more motivated to hit those KPIs to keep their jobs or get promoted. Similarly, if testers have a KPI to find more bugs, they’ll be motivated to log as many bugs as possible. Putting KPIs first shifts the motivation away from achieving quality as a team and towards hitting individual goals.
Now imagine that Vitaly has a KPI of writing ten articles per month. Rease did not ask for ten articles. She asked for three well-researched, in-depth articles that align with the content strategy she created. Quality is her motivator, but Vitaly’s performance review is coming up, so he’s more motivated by hitting his goal of ten articles. If Rease and Vitaly are motivated by different things, the quality of the final product will inevitably suffer.
Quality is the sum of many parts
With so many factors influencing quality, it’s illogical to hold testers solely responsible for the quality of a product. And yet, CEOs throw QA under the bus when criticized for issues with their product, testers are blamed for lawsuits related to systems and processes they don’t control, and QA teams take the heat when budget constraints that limit testing lead to massive outages.
We cannot test quality into a product, just as an editor cannot edit quality into a piece of writing. Everyone involved in creating something is responsible for quality. It’s time we stop arguing about whether or not the testers and QA teams of the world could have prevented various issues and start focusing on how teams can positively influence all the many factors that impact quality.