Brainstorming about Code Reviews

My company is currently working toward achieving CMMI Dev Maturity Level 3. An area the technical solutions working group is exploring is how to formally integrate code reviews into our software development workflows.

This post serves as a collection of thoughts I’ve had while thinking about how to get the most out of code reviews for our company given our projects, team sizes, culture, and CMMI-based requirements. Hopefully any reader, regardless of his/her position in the org chart, can pick up a few useful considerations. Here are the questions I considered:

  • What exactly is a code review?
  • What are the goals?
  • Which aspects of the code are important?
  • Who’s involved?
  • What should be measured?
  • Should there be a checklist?
  • What’s the culture of how reviews are received?
  • What should be avoided in reviews?
  • How often are reviews performed?
  • What happens to changes required after a review?
  • How do you follow up on reviews?
  • How do you train for code reviews?

With regard to team culture, it’s important that any additional process overhead gets buy-in from the people who will have to live with it. In short, devising the process in isolation and then mandating its use will likely be met with resistance.

More importantly, remember that a code review “is about improvement and not perfection.” (Source)

Questions to answer

What exactly is a code review?

I’m sure every reader of this post will have a slightly different answer to that question. The basic premise is that at least one person reviews work done by someone else (an individual or a team), with the aim of keeping the overall quality of the code base high. It’s an effective way to ensure that best practices are followed, that the architecture is used correctly, and, most commonly, that simple mistakes are caught before the code ships.

Something else to consider is what’s in scope for a code review. For example, does a new stored procedure get reviewed the same way .NET code gets reviewed? By the same individuals?

What are the goals?

Having a process for process’ sake isn’t very useful; consider what you hope to accomplish by making code reviews more consistent. Here are some possible goals, which can in turn help answer other questions on this list, such as what to measure.

  • Improving project quality by finding bugs before the software goes to QC
  • Ensuring maintainability through compliance with coding standards
  • Sharing knowledge through exposing team members to more of the code base
  • Maintaining a consistent design and implementation

Which aspects of the code are important?

This list is non-exhaustive (and, conversely, it may contain more than you’d want to include). The important takeaway is to start with some definition, then stay flexible about adding, editing, or removing items as you tune your process.
| Aspect | Comments |
| --- | --- |
| Coding standards | This is for readability and consistency; let this be a living standard for the team; ideally let a build tool check for this so that everyone’s code is treated equally |
| Class, file, or method length | Typically smaller is better, as the code is easier to see on one “page” |
| Readability | On your first read-through, were you able to understand and reason about the code? |
| Commented-out code | With modern source control systems, “dead code” is noise and can potentially be confusing to read |
| Test coverage | Consider your project’s goals and metrics for test coverage; if there are few or no tests, this could indicate that design improvements are needed to make the code easier to test |
| Code left in a better state than found | a.k.a. the Boy Scout rule |
| Architectural patterns | e.g., classes in the correct assembly, frameworks used consistently, “magic strings” in resource files |
| Well-factored code | e.g., are there duplicate sections of code to pull out into a common method? Can a larger method be broken down into several smaller methods? |
| Code smells | There are plenty of lists (I personally like this one) of such symptoms |
| Performance | Are performance bottlenecks addressed? Are there known performance specs for the area being reviewed? |
| Potential bugs | Having another set of eyes can expose both simple problems overlooked during development and misinterpretations of acceptance criteria |
| Security best practices for the project | This could be its own checklist of dos and don’ts |
| Sad paths | What happens if things go wrong (e.g., exceptions, missing files, null references, empty lists, server timeouts)? A sketch follows this table |
| Cyclomatic complexity | A quantitative measure of the linearly independent paths through a given section of code; lower complexity is generally better; tools such as Visual Studio and NDepend can compute this |
| Static analysis | This could include style checkers, linters, or even security scanners (e.g., HP Fortify) |
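
To make a few of these rows concrete (sad paths, well-factored code, cyclomatic complexity), here’s a small before-and-after sketch in C#. The Order and OrderLine types are hypothetical stand-ins, not code from any real project:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical domain types, purely for illustration.
public record OrderLine(decimal UnitPrice, int Quantity);
public record Order(IReadOnlyList<OrderLine> Lines);

public static class OrderMath
{
    // Before: happy path only. A null order or null line list blows up deep
    // inside the loop, far from the actual mistake, and special cases bolted
    // on later tend to pile up here and raise cyclomatic complexity.
    public static decimal CalculateTotalBefore(Order order)
    {
        decimal total = 0;
        foreach (var line in order.Lines)
            total += line.UnitPrice * line.Quantity;
        return total;
    }

    // After: sad paths handled up front with guard clauses, and the per-line
    // computation factored out so each piece fits comfortably on one "page."
    public static decimal CalculateTotal(Order order)
    {
        if (order is null)
            throw new ArgumentNullException(nameof(order));
        if (order.Lines is null || order.Lines.Count == 0)
            return 0m;

        return order.Lines.Sum(LineTotal);
    }

    private static decimal LineTotal(OrderLine line) =>
        line.UnitPrice * line.Quantity;
}
```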

Who’s involved?

The answer may vary depending on the code under review. For example, if the change is small (e.g., cosmetic fixes, typo corrections), perhaps only one or two people should review. On the other hand, if you’re refactoring some plumbing code for performance tuning, including the team lead/architect and a few senior developers might be more prudent.

Since code reviews can double as a training tool, consider including developers from outside your team’s area: they bring a fresh perspective and can help you identify things you missed because of your familiarity with the code. Junior developers could also get a learning boost both by seeing the code and by observing what feedback other developers give during the review.

Another style of work that can influence other questions in this list is whether pair (or mob) programming is involved. This technique inherently has one or more reviewers at the time of code development, so there should probably be a means of capturing useful information in whatever formal way makes sense without too much overhead/busywork.

What should be measured?

It’s out of scope for this post to discuss what defines an “effective metric.” However, if you’re not measuring your process (especially if it’s something new), how would you know whether it’s working or, more importantly, not working?

Here are some things that could be assessed as possible inputs:

  • Number of reviews issued per sprint
  • Amount of time spent doing code reviews
  • Number of check-ins/commits performed that have an associated code review
  • Participation score per review (i.e., number of reviews accepted and completed vs. number of reviewers invited; see the sketch after this list)
  • Percentage (per release?) of reviews with comments vs. “Looks Good” (Caveat: In theory, the more comments, the more follow-up work that needs to be done because the quality isn’t high enough yet. However, not all comments indicate poor quality, as I’ll discuss later.)
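
The last two bullets are straightforward to compute once your review tool can export basic per-review data. Here’s a minimal sketch in C#, assuming a hypothetical ReviewRecord shape; the real fields depend entirely on the tool you use:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical export format; substitute whatever your review tool provides.
public record ReviewRecord(string Id, int ReviewersInvited, int ReviewsCompleted, int CommentCount);

public static class ReviewMetrics
{
    // Participation score: completed reviews vs. reviewers invited, averaged
    // across reviews. Assumes a non-empty collection and at least one invitee
    // per review.
    public static double ParticipationScore(IReadOnlyCollection<ReviewRecord> reviews) =>
        reviews.Average(r => (double)r.ReviewsCompleted / r.ReviewersInvited);

    // Percentage of reviews that drew at least one comment, as opposed to a
    // bare "Looks good."
    public static double PercentWithComments(IReadOnlyCollection<ReviewRecord> reviews) =>
        100.0 * reviews.Count(r => r.CommentCount > 0) / reviews.Count;
}
```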

Should there be a checklist?

Checklists are a powerful tool for making sure items don’t get overlooked, especially considering there are many qualities of the code being reviewed. For example, maybe everyone remembers the coding standards, but did someone check to make sure messages shown to the user don’t expose system details that could pose a security risk?

Although their prescriptiveness is convenient, some on your team may feel handcuffed by the checklist (e.g., “Ugh, there’s so many things on this list! Do we have to do all of them?”).

Other questions that came to mind:

  • How is the checklist implemented? (One lightweight option is sketched after this list.)
  • Where do developers find it?
  • What happens once they complete the checklist (e.g., is it associated with the work in your source control system)?
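
One lightweight option that touches all three questions, assuming your reviews happen through pull requests in a tool that supports templates (GitHub’s PULL_REQUEST_TEMPLATE.md is one real example): commit the checklist to the repository itself. It then appears in every review, evolves under source control like any other file, and its checked-off state stays attached to the merged work. A hypothetical sketch, with items abbreviated from the table above:

```markdown
<!-- .github/PULL_REQUEST_TEMPLATE.md, or your review tool's equivalent -->
## Review checklist
- [ ] Coding standards followed (or the build's style checks pass)
- [ ] Sad paths handled (exceptions, nulls, empty lists, timeouts)
- [ ] No commented-out code left behind
- [ ] Tests added or updated for the change
- [ ] User-facing messages don't expose system details
```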

What’s the culture of how reviews are received?

Reviews are not done (entirely) by automated processes: there are people involved both in reviewing code and in having their code reviewed. It’s beneficial to establish healthy expectations up front for both parties. Here are some techniques to consider in this regard, some of which work better face-to-face.

  • Ask questions (e.g., how does this method work? why was this particular implementation chosen over another?)
  • Focus on the code, not the author — none of us strives to do poorly, so don’t attack the author; the goal is to improve code quality, not to shame people
  • Compliment when things are done well (i.e., don’t just point out flaws or potential improvements)
  • Discuss in person if there’s a potential for a lengthy back-and-forth thread on a “digital” review

What should be avoided in reviews?

As with any endeavor, you need to manage expectations. Most of this post is about what success looks like; however, it’s helpful to also describe what failure looks like. Here are some examples of what Erik Dietrich describes as anti-patterns.

  • The Gauntlet — the author is essentially put in front of a panel of self-important, hyper-critical peers
  • The Marathon — book a conference room for 4 hours and order a pizza
  • The Scattershot — no consistency or predictability from review to review
  • The Exam — you either pass or fail
  • The Soliloquy — the reviewer simply replies “looks good,” with no additional feedback
  • The Alpha Dog — one dominant developer has the final say; this ends up being a game of “appease the alpha dog” when writing code
  • The Weeds — focusing too much on minutiae and missing the big picture

How often are reviews performed?

Here are some questions that pertain to review frequency:

  • Does every check-in/commit require a review? (Something to keep in mind is that the amount of code to review should be kept “small,” however you define that. There’s nothing quite so daunting as a code review that has 34 files with the comment “implemented big feature XYZ”.)
  • Is there a code walk-through for a feature (or non-trivial section of work) before delivering to QC?
  • Will the development team (or subset of it) have a recurring meeting to review code as a group?

What happens to changes required after a review?

This question forces you to consider the logistics of rework.

  • Does the code have to pass a certain number of criteria before being committed to a shipping branch — that is, a gated code review?
  • Do you make additional tasks in your issue management system to track rework initiated via code reviews?
  • Does rework require a follow-up review?

How do you follow up on reviews?

This is an area for which I’ve yet to come up with a clean solution, as implementing it successfully requires both discipline and overhead.

Who watches the watchers? That is, how do you audit your code reviews to ensure that (1) they are being properly implemented, and (2) they remain effective, both in terms of the goals (see above) and in terms of whether individuals are actively part of the process? The answers most likely depend on how much bandwidth you’re willing to part with. Perhaps a handful of people in your organization periodically observe code reviews as they are being executed. You would also want people who continually learn about code quality (e.g., via conferences, articles, productivity tools) to ensure the items under review are still relevant.

Another angle to following up on reviews is aggregating trends and lessons learned.

  • Are there aspects that seem to come up as issues more frequently than others?
  • Who collates this information?
  • Where is it captured so the appropriate individuals can best utilize it?
  • Do tools exist (e.g., style checkers, linters) to help automate parts of the analysis? (This can help externalize some of the “blame”: it’s more tolerable for ReSharper to complain about improper variable casing than to be upset that Bob points out that same problem in your code during every review. One such option is sketched after this list.)
  • What if someone suggests an improvement (e.g., performance, code structure, better use of language features) that could benefit other teams or even other projects in the company? How does that get communicated?
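
On the tooling point, one concrete option in a .NET shop is an .editorconfig file checked into the repository, which the Roslyn analyzers built into the compiler read at build time. The specific rules and severities below are illustrative, not a recommended set:

```ini
# .editorconfig (illustrative): style feedback comes from the build,
# not from Bob, so everyone's code is treated equally.
[*.cs]
indent_size = 4

# IDE0060: flag unused method parameters.
dotnet_diagnostic.IDE0060.severity = warning

# Naming: locals and parameters should be camelCase.
dotnet_naming_rule.locals_should_be_camel_case.severity = suggestion
dotnet_naming_rule.locals_should_be_camel_case.symbols = locals_and_parameters
dotnet_naming_rule.locals_should_be_camel_case.style = camel_case_style
dotnet_naming_symbols.locals_and_parameters.applicable_kinds = parameter, local
dotnet_naming_style.camel_case_style.capitalization = camel_case
```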

How do you train for code reviews?

As people are hired on (or rotated in from other projects), how do you get them set up for success using your code review system?

  • Are there previous “golden” examples they can browse?
  • Is there a particular person on your team who is the Code Review Trainer?
  • Do you employ the Show One, Do One, Teach One model?

Time investment: This post took me 4.5 hours to write.