Recently, I read Sophie Alpert's post on code review, and it resonated with me. Like her, I learned how to review code while working at Google, and I agree with her assessment of changing her team's process to review code: "it was a great decision," she writes. I'm in 100% agreement with the nine points on her bullet list of the benefits of code review.
My friend Sharadh Krishnamurthy's post on mindfulness in code review also resonates with my personal style. However, I go back and forth on this, because communication styles and cultures vary wildly, and I have seen groups with curt communication suffer no loss of trust or hurt feelings because of it.
I wanted to list some of the ways I have seen code review go wrong:
Massive code dump review. I regularly see senior developers drop huge code reviews, several thousand lines of delta and pages of description, on their juniors. The juniors spend days reviewing, maybe add a few comments here and there, approve, and then the code goes to production and all hell breaks loose. This makes me appreciate that Differential focuses on reviewing earlier in the development of a change, with the expectation of frequent iteration. Still, GitHub is fully adequate for this: it is easy to publish a branch and begin code review in a draft PR. But if a tool that pushes developers to seek review earlier also forces them to optimize for the reviewer, switching tools could be worth it.
"Don't call us, we'll call you" code review. Poor turnaround time can cause you to miss your deliverable deadline. Imagine you're working with an open-source GitHub project (not to name names, but it's HashiCorp Terraform) and you send a PR to add a feature. That PR has been waiting on code review for a year, with no comment from HashiCorp on why it has not been reviewed yet. It's small. The author rebases the PR every few weeks and pings it for review, to deafening silence. The problem here isn't code review per se, but even at a smaller scale, without measuring the amount of time code waits in review and making that metric an SLI of your software development life cycle, code-review purgatory can hamstring your ability to deliver (or put out a dumpster fire).
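The waiting-time SLI above can be computed from data any code-review tool exposes. Here is a minimal sketch, assuming PRs shaped like the GitHub REST API's pull-request listing (the `number` and `created_at` fields) and a hypothetical seven-day review SLO; the PR data below is fabricated for illustration.

```python
from datetime import datetime, timezone

REVIEW_SLO_DAYS = 7  # hypothetical SLO: every open PR gets a review within a week

def days_waiting(created_at: str, now: datetime) -> float:
    """Days a PR has waited, given its GitHub-style ISO-8601 created_at."""
    opened = datetime.strptime(created_at, "%Y-%m-%dT%H:%M:%SZ")
    opened = opened.replace(tzinfo=timezone.utc)
    return (now - opened).total_seconds() / 86400

def slo_breaches(open_prs: list[dict], now: datetime) -> list[tuple[int, float]]:
    """Return (PR number, days waiting) for every PR past the review SLO."""
    return [
        (pr["number"], round(days_waiting(pr["created_at"], now), 1))
        for pr in open_prs
        if days_waiting(pr["created_at"], now) > REVIEW_SLO_DAYS
    ]

# Fabricated data shaped like the GitHub API response:
now = datetime(2024, 6, 1, tzinfo=timezone.utc)
prs = [
    {"number": 101, "created_at": "2024-05-30T12:00:00Z"},  # 1.5 days old: fine
    {"number": 87, "created_at": "2023-06-01T12:00:00Z"},   # a year old: purgatory
]
print(slo_breaches(prs, now))  # → [(87, 365.5)]
```

Graphing that list's length week over week is usually enough to surface a team's review purgatory before an author has to ping a year-old PR.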
Soap-box derby review. Reviewers can use code review as a soap-box for irrelevant pet concerns like premature optimization. Or, in reverse, without style guides to point to, authors can refuse to make changes for readability or other legitimate concerns. To prevent both, engineering teams need shared, high standards for code review. Engineering leaders must prioritize documenting what matters in code review, writing code style guides, cataloguing common code review feedback, and measuring code review performance throughout the organization. The Go project is a good example of doing all of this in the open on GitHub.
Disallowing machine review. Encourage automation to elevate human time and attention. If the engineering team doesn't value the time and creative output of its people, and instead expects them to perform repetitive tasks and toil, that signals to those engineers that they could just as well sleepwalk through the work. Snooze, who cares. If your team isn't constantly harvesting the low-hanging fruit of automation, team members end up emulating the automatons leadership claims it doesn't want to pay for.
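The low-hanging fruit here is the class of nits a bot can flag before a human ever opens the diff. A minimal sketch of such a machine reviewer, with illustrative checks that are my assumptions rather than any real team's lint config (the TODO-with-owner convention, for instance, is borrowed from Google-style `TODO(name)` comments):

```python
import re

# Nits a machine should catch so humans can spend review time on design.
# These patterns are illustrative, not a real lint configuration.
NIT_CHECKS = [
    ("trailing whitespace", re.compile(r"[ \t]+$")),
    ("leftover debug print", re.compile(r"^\s*print\(")),
    ("TODO without owner", re.compile(r"TODO(?!\()")),
]

def machine_review(filename: str, lines: list[str]) -> list[str]:
    """Return lint-style comments a bot could post on a diff."""
    comments = []
    for lineno, line in enumerate(lines, start=1):
        for label, pattern in NIT_CHECKS:
            if pattern.search(line):
                comments.append(f"{filename}:{lineno}: {label}")
    return comments

sample = [
    "def handler(req):   ",          # trailing whitespace
    "    print(req.body)",           # debug print left in
    "    # TODO: handle timeouts",   # TODO with no owner
]
for comment in machine_review("server.py", sample):
    print(comment)
```

Wire something like this into CI as a required check and the soap-box shrinks too: the bot posts the nit, and the humans argue only about things worth arguing about.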