Code review is an important part of most teams' workflows. And for good reason. The cost to fix a bug increases exponentially as you progress through the software development lifecycle.
Using code reviews and leaning on your team members' eyes and brains provides an early intervention to find bugs in your code before they make it to a deployed environment.
But knowing exactly what to look at while you're reviewing a team member's code or how to organize your thoughts can be hard. After working with many different teams who all have different priorities and values, this is how I approach reviewing a pull request.
Start with the description
As a reviewer, the very first thing I look at is the pull request (PR) description. A well-written description should give the context for the code you are about to read. It should answer the following questions:
- What does the code do (at a high level)?
- Why was the change necessary?
- What broader goals does the change help achieve?
- Is this a new feature or a bug fix?
- What piece of work is it related to?
- How do I test it?
- Does the author have any open questions?
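One lightweight way to encourage descriptions like this is a pull request template checked into the repo. Here's a sketch of what that might look like (the section names are my own, not a standard):

```markdown
## What
High-level summary of what the code does.

## Why
The problem this solves and the broader goal it supports.

## Type of change
Bug fix / new feature / refactor.

## Related work
Link to the ticket, issue, or epic.

## How to test
Steps a reviewer can follow to exercise the change.

## Open questions
Anything you'd like specific feedback on.
```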
The description primes my brain for what parts of the code to pay attention to and raises my awareness of possible consequences or implications of the change.
Read the diff
Only after I've read the description and understood the requirements and expected behavior do I look at the actual code diff. I prefer to use a side-by-side view so I can see exactly what changed, especially if I'm reviewing multiple lines at a time.
At this point, I'm focused on understanding what the code is doing. I ignore linting and formatting as much as possible – we should have tools installed in the repo for that.
When I'm reading the code, I'm trying to describe in plain English what happens at each line. If I'm pairing, I'll describe what I think is happening to my pair. If I'm alone in my office, I might narrate out loud, or I might just use my inner monologue to describe for myself what I see.
When there are parts of the code that I have to read multiple times or can't understand, I know that's an area where I'll want to add a review comment.
Make suggestions to simplify
Adding a review comment is just the first step. If the code is confusing, I also try to suggest ways to simplify it to help save the author some time. And by simplifying the code, I don't always mean making it less complex. Sometimes you need complex code to meet complex requirements.
But code should not be complicated for the sake of "elegance" or "best practice" or "design patterns."
When I'm trying to decide how to simplify the code, some questions I might ask myself include:
- Would a different or more verbose variable or method name help me understand what the code is doing?
- Would smaller method(s), class(es), or component(s) help separate concerns and make it easier to follow the code?
- Would flipping a conditional from a "double negative AND" to a "negative OR" make more sense conceptually?
- For example, `!this && !that` is equivalent to `!(this || that)`.
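To make that conditional flip concrete, here's a small TypeScript sketch (the `isLoading`/`hasErrors` names are hypothetical, invented for illustration):

```typescript
// Hypothetical form-submission guard, written two equivalent ways.

// "Double negative AND": not loading AND not erroring.
function canSubmit(isLoading: boolean, hasErrors: boolean): boolean {
  return !isLoading && !hasErrors;
}

// "Negative OR": not (loading OR erroring) -- same truth table,
// but often closer to how you'd say it out loud.
function canSubmitFlipped(isLoading: boolean, hasErrors: boolean): boolean {
  return !(isLoading || hasErrors);
}
```

By De Morgan's laws the two forms are interchangeable, so the only question in review is which one reads more naturally in context.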
If none of those questions help and I still find the code confusing (or I don't understand what it's doing at all), an open-ended question is a wonderful way to start a conversation in the review. Something like, "This is a little confusing to me. It looks like it might be doing X but line Y feels like it's doing something else. Could you explain?"
Look for tests
After I've read through the diff to understand what the code is doing and how it's doing it, I look for tests.
Every team has a different testing philosophy and opinions on test coverage, but most teams I've worked on agree that tests are important.
First, I check that critical paths in the new code also have tests added. Not every code path needs a test; it depends on what happens in the branch and the team's coverage expectations. But if a path contains business logic, I think it should be tested. Then, I check that the tests are well named with accurate descriptions and that their assertions are valuable.
To me, a valuable assertion can also act as documentation. Rather than asserting that a method succeeded (i.e., didn't throw an error), I want to see an expectation about the actual value or the shape of the returned object (for more long-term resilience).
I also want to see tests that cover non-happy paths. Are there expected errors we should be prepared for? Can null values be returned?
Review established patterns
Regardless of what your opinions are on code style, it's always easier to understand a consistent codebase than one that has conflicting styles and patterns. While reviewing a PR, I check for consistency with the established patterns already in the repo or new patterns that the team has agreed on.
- Are files in the correct place?
- Are variables/methods/function names cased appropriately?
- Are parameters passed consistently?
- Are constants made when they should be?
- Are utility functions or service classes made when they should be?
- Are database migrations organized appropriately (by table or separating model changes from data migration)?
- If you are managing state, are global items in global state and local items in local state?
- If you use feature flags, is the new code flagged appropriately? Should it be a boolean flag or a string flag?
- If you have an internationalized application, are translations added and organized appropriately?
- If you are managing user permissions, are those handled correctly?
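As one example of the "are constants made when they should be?" question, here's a hedged TypeScript sketch of a hypothetical retry-delay helper, before and after extracting named constants:

```typescript
// Before: magic numbers a reviewer has to puzzle over.
function retryDelayBefore(attempt: number): number {
  return Math.min(1000 * 2 ** attempt, 30000);
}

// After: named constants make the intent (and the units) explicit,
// and give the team one place to change the policy later.
const BASE_DELAY_MS = 1_000;
const MAX_DELAY_MS = 30_000;

function retryDelay(attempt: number): number {
  return Math.min(BASE_DELAY_MS * 2 ** attempt, MAX_DELAY_MS);
}
```

Both versions behave identically; the second just answers the reviewer's questions before they're asked.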
Conduct a smoke test
Finally, the last step before I approve a PR is to test it as a user would (or as the calling service would if this is purely an integration). Hopefully, the PR description includes steps to test that I can follow. But at the same time, I'm also trying to find any edge cases we may have missed when we were defining acceptance criteria or implementing the code.
I pull the branch down locally and run my manual test to see if all the use cases are covered. Can I force any unexpected errors? Am I able to get the UI into a broken state?
If your team is using a platform that spins up a separate environment for each branch (like Netlify deploy previews), I also love to use those to avoid bugs caused by missing or mismatched environment variables.
Share some love
When you're looking at someone else's code, don't forget to comment on the bits you found interesting, elegant, or where you learned something. I rarely have a PR that gets just an "LGTM" (looks good to me) review. I can always find a nice test, a clever variable, or good code organization to add an emoji and complimentary comment to.
Reviewing PRs is an important part of our jobs as developers. But it's hard to know where to start or what to look at, especially since every team has a different set of priorities and values.
I've found that the above approach serves me well no matter the team and creates a culture of meaningful, open feedback. Code reviews can be fun opportunities to learn from other team members, simplify code, and identify bugs before they bite us.