What is software testing, really?
I recently had another interesting conversation about testing. I’ve had this discussion many times throughout my career: What is software testing? Or perhaps more specifically, what isn’t testing?
Typically, these conversations start with someone stating something that limits the bounds of testing in a way I either don’t understand or don’t agree with. After twenty-plus years, my view has not shifted much. This article is not meant to be the universal truth, but it is based on what I have learned and experienced over a couple of decades in the software development industry.
The definition and misinterpretation of software testing
If you look up the term software testing, you might find definitions along these lines:
“Software testing is a way to assess the quality of the software and to reduce the risk of software failure in operation” - ISTQB Certified Tester Foundation Level
“Software testing is the process of evaluating and verifying that a software product or application does what it’s supposed to do” - IBM
“Software testing is the act of checking whether software satisfies expectations.” - Wikipedia
All of these can be interpreted very narrowly or broadly. In my experience, many people (especially those outside of testing communities) interpret it as running software, performing some type of task, and analyzing the results.
I believe that software testing is a lot more than that, and that people interpret the traditional definitions more narrowly than their authors intended.
The terms “shifting left” and “test early, test often” have been around since Larry Smith first coined them in 2001. But we can go back further than that. In his 1981 book Software Engineering Economics, Barry Boehm showed that defects become exponentially more expensive to fix the later we find them, with requirements at the start of that curve and operation (production) at the end.
Yet I still see testers being limited, and limiting themselves, to verifying and validating code once it has been written. Even though software development methods have treated testing as an important part of the whole development cycle since at least the 1970s, we still seem to struggle to build that idea into our everyday work.
And while I argue the definitions never limited it, our development processes certainly have. From the Waterfall Model to the DevOps Loop, they all show testing as an isolated step even when they talk about it as an important aspect throughout the process.
Testing is part of the entire SDLC, but we don’t actually teach it
I studied software development at university — not engineering, but software development. I spent three years learning things like human-computer interaction, software development methodology, database design, and programming. I can’t recall a single course that talked about tests or testing. It was assumed you took responsibility for making sure your code worked, sure, but we didn’t talk about what that actually meant or explore how to do it.
When I got my first job as a developer, I was thrown into a small project group building a niche system for a particular customer segment. In that project, we had the following roles: developers. That was it. We had one person who acted as the liaison with the customers, but we were expected to do everything: figure out requirements, design the system, build the whole stack from the database up to the graphical user interface, test, train our users, build the installation package, and then support and maintain the system in production.
I can’t say we knew how to do any of those things, but I do think it built a solid base for my respect for the different parts of the development lifecycle and for the experts in each of them. To me, they are all connected: they rely on each other and are never fully separate from one another.
Software testing is not a step; it’s a continuous process
After about a decade, I made a shift to software testing. This is when I first came into contact with formal training or even the understanding that this was something you could actually learn how to do. I found my passion in testing at a point in my career when writing code had started to feel less interesting. I knew how to write the code I needed, but the work felt mechanical and repetitive.
Testing presented me with the intellectual challenge I was missing. I dove into all of the literature I could find. My view of software testing took on a new form, with more definition and more structure, while still being heavily influenced by my experience up to that point. I learned to put names to things I was already doing, like boundary value analysis and equivalence partitioning. I learned to take a structured approach to risk management and articulate my reasoning instead of just trusting my intuition. I learned to ask deeper questions and uncover gaps and misconceptions instead of assuming my interpretation was universal. I was already writing scripts to check that I got the intended results before releasing a feature, manipulating database values to verify that my error handling was solid, and manually exploring the feature to try and find scenarios I hadn’t thought of. But now I learned how to do it with intent.
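To put names like these into a concrete form, here is a minimal sketch of what boundary value analysis and equivalence partitioning can look like in practice. The discount rule, the threshold, and the test values are all hypothetical, chosen only to illustrate the technique (written here in Python with pytest):

```python
# Hypothetical example: boundary value analysis and equivalence partitioning
# applied to a made-up discount rule: orders of 100 or more get 10% off.
import pytest


def discount(order_total: float) -> float:
    """Return the discounted total; 10% off for orders of 100 or more."""
    if order_total < 0:
        raise ValueError("order total cannot be negative")
    return order_total * 0.9 if order_total >= 100 else order_total


# Equivalence partitions: invalid input, below the threshold, at/above the threshold.
# Boundary values: just below, exactly at, and just above the threshold.
@pytest.mark.parametrize(
    ("order_total", "expected"),
    [
        (0, 0),                            # boundary between invalid and "no discount"
        (99.99, 99.99),                    # just below the threshold
        (100, 90.0),                       # exactly at the threshold
        (100.01, pytest.approx(90.009)),   # just above the threshold
        (500, 450.0),                      # representative value from the "discount" partition
    ],
)
def test_discount_boundaries(order_total, expected):
    assert discount(order_total) == expected


def test_negative_total_is_rejected():
    with pytest.raises(ValueError):
        discount(-1)
```

The specific numbers matter less than the reasoning behind them: one representative value per equivalence class, plus the values sitting right on and around each boundary.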
While the agile movement started earlier than this, the common way of working where I was at the time was still something of an “iterative waterfall” approach: you went through a set sequence of phases (requirements analysis, system design, coding, testing) and repeated it until you had something worth releasing to customers. Releases were expensive, and customers were used to having to wait.
The books and training I consumed talked about the V-model, a way of developing software where verification and validation happened over and over again across the whole cycle.
This matched my existing view very well: testing is a continuous movement that starts at the beginning of a cycle and continues throughout it, just against different artifacts. The books I devoured talked about static testing methods just as much as dynamic ones, which is why I’ve always seen challenging requirements and designs as being just as much a part of testing as writing and executing tests, or digging through logs and analytics tools to find bugs in production systems.
Dan Ashby’s Continuous Quality In DevOps from a few years back is a good representation of this.
Ask 3 testers what software testing is, and you’ll get 5 different answers
I was once asked to work with a group of testers in a particular domain. The organization felt the output was not improving at the scale it should, given the investment poured into it. I had a hypothesis but tried to keep an open mind. We sat down in a workshop, and I asked them three questions:
- What things connected to testing happen in between a feature being added to the backlog and it being in production?
- What is your role?
- Which of these tasks are your responsibility?
The result was very interesting. Starting with the first question, the group added a ton of things to the timeline, well balanced throughout the lifecycle of a feature. There was clearly no lack of work at any point in time. However, they all mentioned that their workload was unevenly distributed, with high spikes at certain times and little to do at others.
As for roles, half of the group said they were Test Automation Engineers and their responsibility was keeping the existing automation suites up to date and adding automation for new features. Most of the others said they were (manual) testers, and their responsibilities were to test new features, fix bugs, and do regression testing before a release.
One person said they were a tester and that they did testing. They were the only one who insisted that all of the tasks on the board were things they felt responsible for. So even among a group of testing professionals, people who I personally know were well aware of the whole-team approach to quality and the importance of testing early and testing often, only one person actually lived it. If we, as a community, struggle to live it, how can we expect the rest of the industry to understand it?
QA and testing are different things, but not the way we use them today
It is becoming clear to me that the IT industry in general, my peers in engineering management, and testers themselves tend to limit the idea of testing to “just testing” and use the term Quality Assurance (QA) to represent everything else. I see this in conference talks, articles, titles, literature, and conversations.
I disagree with this way of splitting the terms. QA looks at the quality and efficiency of processes, ways of working, and organization throughout the development process. QA focuses on how to build any software and ensure high quality, always looking to apply learnings to future projects. Testing, on the other hand, focuses on the specific software and related artifacts being tested right now. They complement each other, and they are both valuable.
QA activities could be:
- Defining a way of working that makes sure every development team and project includes early testing
- Defining expectations of different roles in relation to the end quality
- Defining quality metrics to adhere to and how to follow up on them
- Putting a risk management process in place
- Adjusting existing processes based on learning from different projects and initiatives
Testing activities could be:
- Reviewing and critiquing requirements, acceptance criteria, or specifications
- Reviewing and challenging system designs, architectural designs, and service designs
- Reviewing code, whether by eye or with the help of different tools
- Running the software and executing tests on any level and in any shape or form
- Debugging, and looking at metrics and logs (see the sketch after this list)
- Creating a risk list for a project or initiative
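To make one of these concrete, here is a minimal, hypothetical sketch of the “looking at metrics and logs” item: a small script that treats log analysis as a test of the hypothesis that a release is healthy. The log format, file name, and error threshold are all made up for illustration; only the Python standard library is used.

```python
# Hypothetical sketch: treating log analysis as a testing activity.
# Assumes a plain-text log where each line looks like:
#   2024-05-01T12:00:00 ERROR payment-service Timeout calling provider
# The file name, format, and threshold are made up for illustration.
from collections import Counter
from pathlib import Path

LOG_FILE = Path("app.log")
ERROR_THRESHOLD = 10  # flag any component with more than 10 ERROR lines


def count_errors_per_component(log_path: Path) -> Counter:
    """Count ERROR lines per component (the third whitespace-separated field)."""
    errors: Counter = Counter()
    for line in log_path.read_text().splitlines():
        parts = line.split()
        if len(parts) >= 3 and parts[1] == "ERROR":
            errors[parts[2]] += 1
    return errors


if __name__ == "__main__":
    counts = count_errors_per_component(LOG_FILE)
    noisy = {component: n for component, n in counts.items() if n > ERROR_THRESHOLD}
    if noisy:
        # A spike here falsifies the hypothesis that the release is healthy.
        print(f"Components over the error threshold: {noisy}")
    else:
        print("No component exceeded the error threshold.")
```

Whether something like this runs on a schedule, in a pipeline, or ad hoc while debugging matters less than recognizing it as a testing activity.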
We all want to grow and feel important
I understand why people try to find other titles. We try to expand past the narrow view of what testing is and, therefore, what testers do. If you see testing as just executing some scenarios against running code, you also limit the value that organizations, and the other roles we collaborate with, see in us. We try to break out of that limitation to show how much more value we can bring. The devaluation shows up in salaries, in career opportunities, and in the way testers are pushed out of organizations or into other roles.
Some of us get frustrated with fighting to make others see testing as the continuous and highly complex craft that it is. Some never agreed with me in the first place; others stopped fighting and went looking for better words instead. Some tried shifting the term Quality Assurance to Quality Assistance, which I would argue has not increased the value others see in us. Others have come up with new terms like Quality Engineering. I can respect this, and I hope it has a positive effect on how testers are viewed. To me, it is still “just testing.”
But even if you don’t agree with me on what software testing is, I hope you agree that what testing is and what you do as a tester are rarely a 1:1 match. Developer roles can (and often do) include more than tasks related to developing software. Testing roles are no different.
We don’t have to restrict ourselves to tasks and skills that are at the core of our role. In fact, it is expected and encouraged to expand outside of that. Most testers I know get really good at working on other, indirectly related, tasks. The skills we build as testers are useful for a broad range of tasks that touch upon other roles and responsibilities.
And just as logically, other roles should include and take responsibility for activities that fall under the testing activities umbrella.
Testing is both a role and an act
Testing, the act, and testing, the role, are different things. A tester can do things that arguably are not testing, and it can be incredibly valuable that they do. A developer, a customer support agent, or a CEO can do things that are arguably testing. They should. The closer their role is to building software, the more involved they should be in testing. I wish others were encouraged to get involved and learn more about the art of testing.
But I believe we struggle to make that distinction — “we,” meaning both software development and testing professionals. And by connecting the limitations we feel are put on us testers (the noun) to testing (the verb), I believe we are doing ourselves a disservice. By rebranding to something other than testers, to be seen as something more and be allowed to use our full potential, we are telling others that testing is indeed just that small part.
It also does not help that a test (the noun) typically means dynamically testing code in one way or another, at least in our industry. We rarely talk about a test when we do other types of activities that are just as much testing. Even Merriam-Webster’s definition of a test — “a critical examination, observation, or evaluation” — shows that testing encompasses much more.
When we do refinement, break down a feature by discussing things like acceptance criteria, ask clarifying questions, or challenge why we need a certain detail, we don’t call that a test. When we look at metrics dashboards after a release, run static code analysis, or compile code, we don’t call that testing either. But in reality, all of these activities are aspects of testing.
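As a small, hypothetical illustration of that last point, even “does the code compile?” can be written down as an explicit check; the module name below is made up:

```python
# Hypothetical sketch: treating compilation as a test.
# py_compile is part of the Python standard library; the target file is made up.
import py_compile


def test_payment_module_compiles():
    # With doraise=True, a syntax error raises PyCompileError and fails the test,
    # turning "the code compiles" into an explicit, automated expectation.
    py_compile.compile("payment_service.py", doraise=True)
```

It is a trivial check, but naming it as a test makes it visible as one.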
So what is testing, really?
Where does this leave us? What is testing? I still argue, after all these years, that testing is all of it.
Testing is challenging assumptions and misconceptions and asking the whys, the what-ifs, and the what-will-happen-whens. It’s the test cases, scenarios, charters, exploration, and automation. Testing is also the code compilations, static code analysis, and security reviews, along with digging through logs and talking to customer support. Sometimes, testing is even the conversations you have at the coffee machine.
A test is just a way of confirming or falsifying a hypothesis — a belief or an idea. You don’t have to limit yourself. Testing is everywhere.