Want to shorten the QA interview process? Treat it like a smoke test.

Welcome to another article in our series on hiring and interviewing for QA roles. So far we’ve covered analyzing job descriptions and CVs and designing a good interview process.

In a previous article I suggested that the best way to verify a candidate’s proficiency in a certain job is to actually let them perform that job, observe the process, and check the results. However, some work tasks or projects require a significant amount of time to perform.

Imagine we want to hire a QA engineer. One of the requirements of the role is to prepare a QA strategy and execute it with a team. Obviously, it’s impossible to evaluate a candidate’s ability to create and execute an entire strategy during the interview — it involves too many steps, and each of them is context-specific. Understanding the scope of a project like this could take days, if not weeks.

Asking candidates to spend a week designing a strategy for a company that has no obligation to hire them simply won’t work — it’s unfair to the candidates and would likely lead to very few applicants.

As lengthy interviews with complex take-home projects aren’t practical, the remaining option is to decrease the scope of the interview process.

In software testing, we face a similar situation: it’s impossible to check all of a product’s functionality during every single release.

In QC, we design smoke test plans to cover just the most critical verification scenarios. Much like the candidate evaluation process, smoke testing doesn’t provide extensive verification coverage — but it can be performed faster than the detailed test plan and still provide a lot of valuable insights. 

Approach the interview like a smoke test plan

We’ve previously defined best practices for effective smoke testing; a simplified version would be:

  • Identify critical functionality (an app’s core purpose, key functions, and user needs)
  • Determine key functions (functions crucial to app functionality, high complexity, or frequent use)
  • Engage stakeholders (involve product managers, business analysts, or end users in smoke test design) 
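As a loose illustration, a smoke suite in code is nothing more than a short list of critical checks; everything outside that list is deferred to the full regression plan. The app and check names below are hypothetical placeholders, not a real framework:

```python
# Minimal sketch of a smoke suite for a hypothetical checkout app.
# Only the critical paths are checked; all other functionality is
# left to the detailed test plan.

def app_is_reachable() -> bool:
    # Placeholder for a real health check (e.g. GET /health)
    return True

def user_can_log_in() -> bool:
    # Placeholder for a login flow against a test account
    return True

def user_can_check_out() -> bool:
    # Placeholder for the core purchase flow
    return True

# The smoke plan is just the short list of critical checks
SMOKE_CHECKS = [app_is_reachable, user_can_log_in, user_can_check_out]

def run_smoke_suite() -> dict:
    """Run every critical check and report pass/fail per check."""
    return {check.__name__: check() for check in SMOKE_CHECKS}

results = run_smoke_suite()
print(results)
```

The interview analogue is the same move: a deliberately short list of must-pass checks instead of exhaustive coverage.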

Let’s explore how these smoke testing practices can translate to an efficient interview process. 

Identify critical functionality

When designing a smoke test plan, the first step is to identify the critical functionality — something that we believe the product must do. This critical functionality will be the object of the smoke test, while all other functionality will not be checked at all.

Similarly, when shortening the interview process we must reduce the scope of verification. The first step is to determine everything that is not critical for performing the role and leave it out. This leaves us with what the candidate must be able to do, providing a narrower focus for the interview process.

We can equate the “critical functionality” from smoke testing an application to “qualifications” in a job description. 

For example, experience with Playwright might be nice to have, but is it a critical qualification? Would experience with other automation frameworks suffice? Or, if you already have a good SDET on the team, is automation experience strictly necessary? 

If skills are not critical to the job or can be easily obtained through on-the-job training, leave them out of the interview process. You can always make note of these skills and evaluate them during the probationary period. 

If there are critical activities that are still impossible to fit into a short interview process — such as the ability to prepare and execute a QA strategy — we need to determine how to find smoke checks for those activities. 

A smoke check might simply be a list of questions. Knowledge does not equal skill, but asking detailed questions and knowing what to look for in the answers can help you judge a candidate’s abilities.

Determine the key functions of the role 

We can equate the “key functions” from smoke testing an application to “key responsibilities” in a job description. 

Let’s take a look at a list of responsibilities from an example job description:

  1. Review requirements, specifications, and technical design documents to provide timely and meaningful feedback
  2. Create detailed, comprehensive, and well-structured test plans and test cases
  3. Estimate, prioritize, plan, and coordinate testing activities
  4. Manual testing (component, integration, functional, regression, etc.)
  5. Identify, record, document thoroughly, and track bugs
  6. Keep in touch with internal teams (e.g. developers, other QA, and managers) to identify system requirements
  7. Stay up-to-date with new testing tools and test strategies

First, we need to exclude all the responsibilities which are impossible or very hard to check during the interview. If we can’t verify something, there’s no value in trying to do so since the quality of obtained information will be very low and won’t help us make an informed decision. 

In this particular example, the last two items (“keep in touch” and “stay up-to-date”) should be excluded. However, it might be reasonable to mark these responsibilities for further verification during the probation period.

Second, we need to remove all the responsibilities which are “included” in others or assumed by others. 

In this particular set of responsibilities, we can infer that if the candidate can “create detailed, comprehensive, and well-structured test plans and test cases,” they will also be able to “review requirements, specifications, and technical design documents.” The rationale behind this inference is that the ability to create effective test plans and cases inherently requires technical knowledge, attention to detail, analytical skills, and a deep understanding of the software’s requirements and specifications.

Also, if a candidate can perform "Manual testing (component, integration, functional, regression, etc.)" and "Create detailed, comprehensive, and well-structured test plans and test cases" it's a strong indication that they possess the requisite skills to "Identify, record, document thoroughly, and track bugs." 

These two steps leave us with a much shorter list, which seems to be the critical minimum of responsibilities we need to check during the interview:

  1. Create detailed, comprehensive, and well-structured test plans and test cases
  2. Estimate, prioritize, plan, and coordinate testing activities
  3. Manual testing (component, integration, functional, regression, etc.)

We should be certain that if the candidate can perform these responsibilities, they will meet the bare minimum requirements for the job.

Engage relevant stakeholders

Now we need to figure out the ways to quickly check the candidate’s proficiency in performing these responsibilities and involve the proper people as part of the evaluation process. 

We can equate engaging stakeholders in a smoke test to looping in the relevant stakeholders for hiring decisions. 

All responsibilities are context-specific: an employee needs to know the product, processes, people, and tech to be productive. Interview time is limited, as is the amount of information we can give the candidate. This forces us to choose carefully where we want to provide context to get better verification, and where we can rely on the candidate’s general knowledge without our context.

These choices are best done by those who are already proficient in performing the responsibilities and therefore can design the verification process. In our example, the stakeholders would be QA engineers (or a QA lead).

For each of the three remaining responsibilities in the list, can we simply ask some questions and evaluate the answers to determine whether the candidate is capable of performing the activity?

For the first responsibility, “Create detailed, comprehensive, and well-structured test plans and test cases,” creating a comprehensive test plan for an average feature isn’t feasible during the interview. However, we can infer that the candidate will be capable if they create a set of test cases for a feature we provide and explain their line of thought.

For the second responsibility, “Estimate, prioritize, plan and coordinate testing activities,” the context (our people, processes, and culture) heavily influences the capability and productivity of the employee. We cannot possibly provide the candidate with a full team and a week’s worth of time to see how they’d handle planning and coordination. However, to perform this responsibility, the candidate must understand the principles of estimation, planning, and coordination. We can simply come up with a set of questions we believe will reveal the candidate’s knowledge. For example:

  • Can you describe the methods you use to estimate testing time and resources for a new project?
  • How do you adjust your estimates when project requirements change?
  • How do you determine the priority of test cases for a project?
  • Can you give an example of a situation where you had to prioritize testing activities under tight deadlines? How did you decide what to focus on first?
  • Explain how you coordinate testing activities with other teams, such as development and product management.
  • Tell me about a time when you had to adjust your testing plan due to unexpected challenges. How did you handle it?
  • How do you manage conflicts within a team, especially when it comes to disagreements about prioritization and resources?

The third responsibility, “Manual testing (component, integration, functional, regression, etc.),” requires assessing both skill and knowledge. We can first ask a set of questions, for example:

  • What types of manual testing are you familiar with? Which ones do you use and when?
  • Can you explain the difference between component testing and integration testing? In which scenarios would you use each?
  • Explain the difference between black-box and white-box testing. In which situations would you prefer one over the other?

We can then take the test cases the candidate prepared during the discussion of the first responsibility and ask them to do the manual testing based on those test cases/test plans.

With this approach, we should be able to verify all the critical responsibilities while keeping the interview time under two hours.

At this stage we should end up with a list of checks (or smoke checks) that would all fit in a short interview. We are aware that these checks won’t provide full confidence in the candidate’s abilities, but as with smoke testing, we trade the extensiveness of verification for time.

A good practice is to invite one or two stakeholders to the interview, so that the specialists who helped design the verification process also perform it. However, make sure the interview doesn’t feel like an exam: exams add stress that isn’t relevant to the work and skews the results, as people perform much worse in stressful situations.

Apply smoke testing practices to shorten interviews without sacrificing evaluation 

To recap, if you know that the verification process for all the candidate’s abilities would take too much time, lean on the principles of smoke testing to:

  • Identify the most critical functions (qualifications) of the role
  • Determine key functions (responsibilities) of the role 
  • Engage relevant stakeholders (interviewers) to evaluate candidates

Please remember that reducing the extensiveness of the verification yields less quality control, so you may want to consider adding a probationary period that allows you to evaluate the candidate while they are performing the job. 

Interested in more? Check out our ever-growing QA hiring & interviewing series.