How I Conduct Code Reviews
Code reviews are a key process that ensures the long-term health of a codebase and the growth of engineering teams. Over the years, I’ve refined a consistent, structured approach to code reviews that balances pragmatism with quality. Here’s how I approach reviewing code as a senior software engineer and tech lead.
1. Start with Understanding the Problem
Before diving into the code, I take a step back to understand what problem the code is trying to solve. If the pull request doesn’t clearly state this, I ask. Without knowing the intent, I can’t meaningfully assess whether the solution is appropriate. I also assess whether the engineer asking for feedback understands the problem.
Key questions I ask:
- What is the user or system problem?
- How is this change addressing that problem?
- Is the solution scoped appropriately for the problem?
- Is there a simpler or more direct way to achieve the same goal?
Without that understanding, any further effort put into the change is wasted.
2. Check for Reliability
Once the problem is clear, I examine whether the code reliably solves it. That includes correctness, error handling, and defensive design.
Here’s what I look for:
- Correctness: Does the code actually do what it claims? Are there tests that confirm this?
- Error Handling: How does the code behave in failure scenarios such as network issues, bad input, or partial state? Submitted code very often handles only the happy path and doesn't anticipate the various error states (see the sketch below).
- Robustness: Will it crash under edge cases? What if external dependencies fail?
- Security: Does it enforce appropriate access controls? Could it be misused or abused?
- User Mistakes: Can the system recover gracefully when a user does something unexpected?
- Performance for Context: Is the implementation efficient enough for the expected data volume, load, or frequency of use?
This part is about making sure the software just works, and works well under the conditions it’s likely to face.
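To make the happy-path problem concrete, here's a minimal Python sketch. The endpoint, function names, and use of the requests library are hypothetical, not from any particular codebase; the point is the contrast between code that assumes everything works and code that anticipates failure.

```python
import requests

def fetch_profile_happy_path(user_id: str) -> dict:
    # Happy path only: assumes the network is up, the response is 200,
    # and the body is valid JSON. Any deviation raises an unhandled error.
    return requests.get(f"https://api.example.com/users/{user_id}").json()

def fetch_profile(user_id: str) -> dict | None:
    # Defensive version: validates input, bounds the request with a timeout,
    # and distinguishes "not found" from transient failures.
    if not user_id:
        raise ValueError("user_id must be non-empty")
    try:
        response = requests.get(
            f"https://api.example.com/users/{user_id}", timeout=5
        )
    except requests.RequestException:
        return None  # network failure: let the caller decide whether to retry
    if response.status_code == 404:
        return None
    response.raise_for_status()  # surface unexpected 4xx/5xx explicitly
    return response.json()
```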
3. Evaluate Scalability
Code that works today may not work tomorrow if it doesn’t scale with growing usage or data.
During reviews, I ask about:
- Performance Under Load: How does this behave with 10x the data or requests?
- Resource Usage: Does it create unnecessary memory, CPU, or IO overhead?
- Design Constraints: Are there architectural decisions (e.g., N+1 queries, tight loops, large in-memory operations) that might become bottlenecks? (The sketch below shows the N+1 pattern and a batched alternative.)
- Asynchronous Considerations: Could this be refactored to avoid blocking operations or enable batching?
Even if today’s use case is small, building with growth in mind pays dividends.
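As an illustration of the N+1 concern, here's a small sketch using Python's sqlite3 module. The orders/users schema and function names are hypothetical; what matters is that the batched version makes one round trip instead of one per row.

```python
import sqlite3

# Hypothetical in-memory schema so the sketch is self-contained.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, user_id INTEGER);
""")

def order_owners_n_plus_one(order_ids: list[int]) -> dict[int, str]:
    # N+1 pattern: one extra query per order to fetch its owner.
    # Fine for 10 rows, painful for 10,000.
    owners = {}
    for order_id in order_ids:
        row = conn.execute(
            "SELECT u.name FROM orders o JOIN users u ON u.id = o.user_id WHERE o.id = ?",
            (order_id,),
        ).fetchone()
        owners[order_id] = row[0]
    return owners

def order_owners_batched(order_ids: list[int]) -> dict[int, str]:
    # Batched version: a single query with an IN clause does the same work
    # in one round trip, so cost grows with result size, not with call count.
    placeholders = ",".join("?" for _ in order_ids)
    rows = conn.execute(
        f"SELECT o.id, u.name FROM orders o JOIN users u ON u.id = o.user_id "
        f"WHERE o.id IN ({placeholders})",
        order_ids,
    ).fetchall()
    return {order_id: name for order_id, name in rows}
```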
4. Assess Maintainability
The code might be correct and performant, but can others work with it easily?
Maintainability is critical, and I break it into three areas:
Operability
- Can the system be easily monitored, debugged, and supported in production?
- Are there clear logs, alerts, and metrics for diagnosing issues? (See the sketch after this list.)
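Here's a minimal sketch of what I mean by diagnostics baked into the code path, using Python's standard logging module. The payment flow and field names are hypothetical; what matters is that success, failure, and latency are all visible in production without attaching a debugger.

```python
import logging
import time

logger = logging.getLogger("payments")

def charge_card(order_id: str, amount_cents: int) -> None:
    # Stand-in for a real payment-gateway call.
    pass

def process_payment(order_id: str, amount_cents: int) -> bool:
    start = time.monotonic()
    try:
        charge_card(order_id, amount_cents)
    except Exception:
        # logger.exception records the traceback along with the context fields.
        logger.exception("payment failed order_id=%s amount_cents=%d", order_id, amount_cents)
        return False
    elapsed_ms = (time.monotonic() - start) * 1000
    logger.info(
        "payment succeeded order_id=%s amount_cents=%d elapsed_ms=%.1f",
        order_id, amount_cents, elapsed_ms,
    )
    return True
```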
Simplicity
- Is the code easy to follow, or does it hide complexity behind clever abstractions? (A small example follows this list.)
- Could a new engineer understand and contribute within an hour of onboarding?
- Are there unnecessary layers or confusing patterns that could be simplified?
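A small, hypothetical example of the kind of cleverness I flag. Both functions compute the same total; the second is the one I'd rather see the team maintain.

```python
from functools import reduce

# "Clever": a fold plus an inline conditional crammed into one line.
def total_clever(items):
    return reduce(lambda acc, i: acc + (i["price"] * i["qty"] if i.get("active") else 0), items, 0)

# Simple: the same logic, readable at a glance and easy to extend.
def total_simple(items):
    total = 0
    for item in items:
        if item.get("active"):
            total += item["price"] * item["qty"]
    return total
```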
Evolvability
- Can this system adapt to future changes?
- Is the logic modular enough to be extended or modified with minimal impact? (See the sketch after this list.)
- Does the change avoid tech debt that might make future enhancements painful?
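Here's a sketch of what I mean by modular, extensible logic, assuming a hypothetical export feature. Adding a new format means writing one function and one registration; existing callers don't change, and no if/elif chain grows over time.

```python
from typing import Callable

# Registry of export functions, keyed by format name.
EXPORTERS: dict[str, Callable[[list[dict]], str]] = {}

def register_exporter(name: str):
    def decorator(fn: Callable[[list[dict]], str]):
        EXPORTERS[name] = fn
        return fn
    return decorator

@register_exporter("csv")
def export_csv(rows: list[dict]) -> str:
    if not rows:
        return ""
    header = ",".join(rows[0].keys())
    lines = [",".join(str(v) for v in row.values()) for row in rows]
    return "\n".join([header] + lines)

def export(rows: list[dict], fmt: str) -> str:
    # Callers dispatch through the registry and never need to know
    # which formats exist.
    return EXPORTERS[fmt](rows)
```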
Readable, clean, and thoughtful code makes the whole team faster and more confident when it's time to make changes.
Conclusion
Code review is about making thoughtful tradeoffs. My goal is to help shape code that’s robust, scalable, and easy to work with.