When automated code reviews work — and when they don't

Automated tools promise programmers and development teams faster, more efficient workflows, but relying solely on automated code review tools won’t guarantee high-quality code.

Automated code review is the process of using software tools to automatically analyze source code for issues such as bugs, security vulnerabilities, and coding standard violations. These tools are designed to assist human reviewers by flagging potential issues that may require attention. Automated code review tools are usually run on pre-commit hooks or integrated into a continuous integration/continuous deployment (CI/CD) pipeline, allowing for near real-time feedback during the development process.
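As a concrete illustration, wiring a linter into a pipeline is often just a few lines of configuration. This is a minimal sketch assuming GitHub Actions and a project with an npm `lint` script; the details will vary with your tooling and platform:

```yaml
# .github/workflows/lint.yml — illustrative CI lint step
# (assumes GitHub Actions and an npm "lint" script; adapt to your own pipeline)
name: lint
on: [push, pull_request]
jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci
      - run: npm run lint   # fails the build if the linter reports errors
```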

Automated code review is typically conducted using linters, automated code review suites, or a combination of the two methods. However, these tools aren't all-encompassing solutions for code quality and have limitations. In some cases, over-reliance on automated code review can even negatively impact overall code quality.

Automated code review can help speed up the code review process

Unsurprisingly, a major selling point of automated code review tools is that they offer a less time-consuming alternative to fully manual reviews. I’ve encouraged alternatives to manual code reviews in a previous article, in part because manual reviews come with significant drawbacks: they can delay code deployment, foster negative social dynamics, and induce context switching — all with limited return on investment.

Many companies recognize these challenges and turn to automated code review tools like linters and automated code review suites. These tools offer several advantages: they run quickly enough to provide near-immediate feedback to developers, they reduce negative social dynamics by providing feedback via a machine rather than a person, and they eliminate context switching.

Code linters help identify security vulnerabilities and inconsistencies in code style

Code linting involves automated checking of your source code for both programmatic and stylistic errors, typically using a tool known as a linter. Linters are among the simplest forms of automated code review tools.

Linters operate through static code analysis — they don't execute the code but scan it to build an Abstract Syntax Tree (AST). Then, they run checks against this AST based on a previously established set of rules.
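To make this concrete, here is a minimal sketch of an AST-based check written as a custom ESLint rule. The visitor keys follow ESLint’s rule API; the rule itself is a toy example:

```js
// Illustrative custom ESLint rule: walk the AST and report every `var`
// declaration, encouraging `let`/`const` instead.
module.exports = {
  meta: {
    type: "suggestion",
    docs: { description: "disallow var declarations (toy example)" },
  },
  create(context) {
    return {
      // Called for every VariableDeclaration node found in the AST
      VariableDeclaration(node) {
        if (node.kind === "var") {
          context.report({ node, message: "Use let or const instead of var." });
        }
      },
    };
  },
};
```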

This method is highly effective for identifying deviations in code style and conventions, which gives code produced by different developers a more consistent look and feel. Most linters offer customizable rules, allowing teams or entire companies to define their own standards for variable naming, comment formatting, spacing, and indentation.
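For example, a team’s shared ESLint configuration might pin down naming, indentation, and quoting conventions. The rule names below are real ESLint core rules; the chosen values are just one team’s taste:

```json
{
  "rules": {
    "camelcase": "error",
    "indent": ["error", 2],
    "quotes": ["error", "single"],
    "semi": ["error", "always"]
  }
}
```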

Additionally, linters can identify obvious instances of security vulnerabilities like cross-site scripting (XSS) or SQL injections.
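For instance, a security-oriented lint rule can flag obviously tainted query construction. In this sketch, `db.query` is a hypothetical database call; it is the pattern, not the API, that a linter matches on:

```js
// Flagged: user-controlled input concatenated into a SQL string (injection risk)
db.query("SELECT * FROM users WHERE name = '" + userName + "'");

// Not flagged: a parameterized query, where the driver handles escaping
db.query("SELECT * FROM users WHERE name = $1", [userName]);
```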

Some advocates of linters claim these tools can spot performance issues. However, such claims often rely on simplified examples, as real-world performance issues are generally better identified through profiling.
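A quick measurement is usually more informative than a lint warning here. As a minimal sketch, Node’s built-in console timers (or a sampling profiler such as `node --prof`) show where time actually goes; `buildIndex` and `records` are hypothetical stand-ins for a suspected hot path:

```js
// Measure the suspected hot path instead of guessing from the source
console.time('buildIndex');
buildIndex(records); // hypothetical function and data
console.timeEnd('buildIndex'); // prints the elapsed time for this call
```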

Linters usually work well with integrated development environments (IDEs) or can be set to run on a pre-commit hook, making their integration both seamless and cost-effective. Linters are available for virtually any programming language, and some even support multiple languages. A popular example of a JavaScript linter is ESLint.
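As one common setup — a sketch assuming the lint-staged package together with a hook manager such as husky — a project can lint only the files being committed:

```json
{
  "scripts": {
    "lint": "eslint ."
  },
  "lint-staged": {
    "*.js": "eslint --max-warnings 0"
  }
}
```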

The cost of incorporating a linter into your tool stack is minuscule. While the value it adds to the code review process may not be enormous, it's still significant: it ensures consistent code style and makes it easier to enforce coding conventions.

Automated code review suites dig a bit deeper

Many companies have recognized that automated code review tools should offer more than just linting capabilities. As a result, there's now a wide range of automated tools on the market, such as SonarQube and CodeClimate, which often support multiple programming languages — for example, SonarQube supports 29 languages.

Unlike linters, which usually run locally, these products typically operate on a server. This means there's a slight delay as the server analyzes the submitted lines of code. However, this delay is generally not long enough to require developers to switch contexts while waiting for results.

Most of these advanced tools go beyond static code analysis. They also provide reports on code duplication, unit test coverage, code complexity, and security recommendations. These insights are gained through deeper code inspection and by running unit tests against the submitted code.
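As an illustration, pointing such a suite at a project is often just a small configuration file. This sketch uses SonarQube’s scanner properties (key names from its scanner documentation; the paths are assumptions about the project layout):

```properties
# sonar-project.properties — illustrative scanner configuration
sonar.projectKey=my-project
sonar.sources=src
sonar.tests=test
# Feed in unit test coverage so the server can report it alongside the analysis
sonar.javascript.lcov.reportPaths=coverage/lcov.info
```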

The subscription costs for these tools are relatively modest, and they generally offer better value than basic linters.

For example, most automated code review tools can detect code duplication across files in the codebase, which linters can’t do because they usually operate on single files only. Additionally, many projects require a certain percentage of the code to be covered by tests; linters don’t report test coverage, but most automated code review tools do.
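Coverage requirements can also be enforced on the test-runner side. For example, a minimal Jest configuration (a sketch; the thresholds are arbitrary) fails the run when coverage drops below a set percentage:

```js
// jest.config.js — fail the test run if coverage falls below these thresholds
module.exports = {
  collectCoverage: true,
  coverageThreshold: {
    global: { lines: 80, branches: 70 }
  }
};
```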

Over-reliance on automated code review tools can be problematic

The primary issue I see with automated code review tools is that they can create an illusion of quality control in software development.

In 1936, Alan Turing demonstrated that no algorithm can determine, for every possible program and input, whether the program will terminate, a result known as the “halting problem.” This limitation means that neither linters nor other automated code review tools can, in general, verify nontrivial facts about a program’s runtime behavior. As I've discussed in an article about code review alternatives, simply reading or analyzing the code is insufficient for uncovering serious defects. Neither human reviewers nor static analysis algorithms can verify that the code meets the specifications or user requirements.
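A classic illustration: no static analyzer can tell you, in general, whether a loop like the one below terminates for every input. Whether this particular one does is the still-open Collatz conjecture:

```js
// Does this loop halt for every positive integer n? Nobody knows:
// this is the Collatz conjecture, and no linter can decide it.
function collatz(n) {
  while (n !== 1) {
    n = n % 2 === 0 ? n / 2 : 3 * n + 1;
  }
  return n;
}
```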

This inability to identify genuine product defects is a significant limitation of any automated code review tool. Both manual and automated reviews fall short when it comes to comprehensive quality control.

Moreover, the frequent false positives generated by these tools can lead to “warning fatigue.” The more inconsequential warnings we see, the less attention we pay to those that are actually important.

Relying solely on manual or automated code reviews for more than enforcing code style and conventions is risky. To illustrate this, compare the defects identified by these tools with those found by a skilled QA engineer.

The first step in confirming that code works as intended is to run it and test its execution against user stories. One approach to effective testing is to adopt Test-Driven Development (TDD) practices, or at the very least, to maintain robust automated test coverage.
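A minimal example of what that looks like in practice: an execution-based check against a user story, here written as a Jest test (the `applyDiscount` module is hypothetical):

```js
// pricing.test.js — verifies behavior by actually running the code, which
// neither a human reader nor a static analyzer can do
const { applyDiscount } = require('./pricing'); // hypothetical module

test('orders over $100 get a 10% discount', () => {
  expect(applyDiscount(200)).toBe(180);
});

test('orders of $100 or less are unchanged', () => {
  expect(applyDiscount(50)).toBe(50);
});
```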

Automated code reviews work best in combination with other methods

Code linters and other automated code review tools excel at quickly identifying simple issues within their capabilities, such as code style and formatting inconsistencies, violations of code conventions, and trivial errors. Because they operate quickly, they can be integrated into CI/CD pipelines or run locally on a pre-commit hook, providing almost instantaneous feedback.

However, relying solely on these tools for quality control can be counterproductive, and in some cases, even detrimental to the overall quality of the code.

Automated code reviews show the best results in combination with pair or mob programming, where code quality is assured right in the development process and automated checks provide an additional – and almost free – level of control. Pair (or mob) programming makes manual code reviews largely redundant, and automated code reviews fill the remaining gap nicely.




