Overview
Code review is the systematic examination of source code by developers other than the original author. The practice serves multiple functions: detecting defects before they reach production, ensuring adherence to coding standards, sharing knowledge across the team, and maintaining architectural consistency. Reviews occur at various stages of development and range from informal desk checks to structured formal inspections.
The fundamental purpose extends beyond finding bugs. Reviews distribute system knowledge throughout the team, preventing information silos where only one developer understands critical components. Junior developers learn patterns and practices by observing feedback on their code and studying senior developers' implementations. The process also enforces accountability—knowing that peers will examine code encourages developers to produce higher quality work initially.
Different contexts demand different review approaches. Small teams working on prototypes might use lightweight, conversational reviews. Large organizations maintaining critical systems often require multiple reviewers and documented approval processes. Open source projects rely heavily on asynchronous, distributed reviews where contributors across time zones examine proposed changes. The challenge lies in balancing thoroughness with velocity—excessive review overhead slows development, while insufficient scrutiny allows defects to accumulate.
Modern code review integrates directly into version control workflows. Pull requests or merge requests serve as natural review checkpoints, allowing discussion and iteration before code merges into the main branch. This differs from historical practices where reviews occurred in separate meetings or required printing code listings. The shift toward continuous integration has made code review a continuous process rather than a distinct phase.
```ruby
# Example: A pull request review flow
class PaymentProcessor
  def process_payment(amount, currency)
    # Reviewer comment: "What happens if amount is negative?"
    # Reviewer comment: "Should we validate currency code?"
    charge_card(amount, currency)
    send_receipt
  end
end
```
Key Principles
Focus on Code, Not People
Reviews examine the code itself, not the developer who wrote it. Comments address what the code does or fails to do, not personal attributes. The distinction matters because developers naturally feel protective of their work. Feedback phrased as "This approach creates a memory leak" differs significantly from "You created a memory leak." The former focuses on the technical issue, the latter makes it personal. Maintaining this separation requires conscious effort, especially during discussions about significant architectural problems.
Shared Code Ownership
Code review enforces the principle that the team owns the codebase collectively. Individual developers author specific changes, but the entire team bears responsibility for code quality. This shared ownership means any team member should be able to understand, modify, and maintain any part of the system. Reviews serve as the mechanism ensuring this collective understanding. When only the original author understands their code, the team has created a dangerous dependency.
Constructive Intent
Every review comment serves a constructive purpose—improving the code, sharing knowledge, or preventing future issues. Comments lacking constructive intent create friction without adding value. Nitpicking about subjective preferences wastes time and damages relationships. Effective reviewers distinguish between critical issues requiring changes and minor suggestions that improve code but remain optional. They also explain the reasoning behind feedback rather than simply declaring something wrong.
Context Awareness
Reviews must consider the broader context surrounding the change. A quick hotfix addressing a production incident requires different scrutiny than a major architectural refactoring. Time-to-market pressures, technical debt levels, and team experience all influence appropriate review standards. Reviewers who ignore context become bottlenecks, blocking reasonable compromises that balance perfection against practical constraints.
Bidirectional Learning
Code review teaches both the author and the reviewer. Authors learn from feedback about their implementation choices. Reviewers learn by studying different approaches to problems they might solve differently. Junior developers particularly benefit from reviewing senior developers' code—they observe patterns and techniques they can apply in their own work. This bidirectional knowledge transfer makes reviews valuable even when no significant issues are found.
Consistency Over Perfection
Code review enforces consistency across the codebase. Consistent style, patterns, and approaches reduce cognitive load when developers move between different parts of the system. A perfectly written but stylistically inconsistent module creates friction. Review standards should reflect the team's agreed-upon conventions rather than individual preferences. This requires documented standards that reviews enforce.
Timely Feedback
Review latency directly impacts development velocity. Delayed reviews force authors to context-switch when feedback arrives days later. Long review cycles also discourage small, frequent commits in favor of large batches—the opposite of good continuous integration practices. Teams must balance thorough reviews with responsive feedback. Automated checks catch common issues immediately, freeing reviewers to focus on logic and design.
Implementation Approaches
Pre-Commit Reviews
Pre-commit reviews occur before code merges into the main branch. Pull requests exemplify this approach—the author submits changes, reviewers examine and comment, the author addresses feedback, and the code merges once approved. This model prevents defects from entering the main branch but creates a review bottleneck. Every change waits for reviewer availability and attention.
The approach works well for teams practicing trunk-based development where the main branch must remain deployable at all times. It also suits projects with strict quality requirements where defects carry high costs. However, the synchronization overhead can slow small teams or projects with rapid iteration cycles. Teams must establish clear expectations about review response times and the number of approvals required.
```shell
# Pre-commit review workflow
# 1. Author pushes a feature branch
git checkout -b feature/add-caching
# ... implement caching ...
git commit -m "Add Redis caching layer"
git push origin feature/add-caching
# 2. Create a pull request
# 3. Reviewers examine the code and leave comments
# 4. Author addresses feedback
git commit -m "Address review feedback: Add TTL configuration"
git push origin feature/add-caching
# 5. Approval and merge
```
Post-Commit Reviews
Post-commit reviews examine code after it merges into the main branch. This approach prioritizes velocity over perfect quality at merge time. Reviewers still examine changes but do so asynchronously without blocking the author's progress. Issues discovered in post-commit reviews are addressed in subsequent commits.
This model suits teams with strong automated testing, high trust levels, and contexts where the cost of bugs is relatively low. It maintains rapid iteration while still capturing the knowledge-sharing benefits of review. However, it requires discipline to address discovered issues promptly rather than letting technical debt accumulate. Post-commit reviews work best when combined with strong continuous integration and the ability to quickly revert problematic changes.
Pair Programming as Continuous Review
Pair programming provides real-time, continuous code review. Two developers work together at a single workstation—one writing code (the driver) while the other reviews each line as it's written (the navigator). The navigator catches errors immediately and suggests alternatives before the code exists in any branch. This eliminates formal review cycles entirely.
The approach offers immediate feedback and intense knowledge sharing but comes at high cost—two developers produce what one could accomplish alone. Teams typically reserve pairing for complex problems, knowledge transfer scenarios, or critical components where the intensive review justifies the overhead. Some teams use pairing for initial implementation and follow with asynchronous review of the final result.
Mob Programming
Mob programming extends pairing to include the entire team. Everyone works on the same code simultaneously, with one person driving and others navigating. This provides maximum collective review and eliminates formal review steps, but the resource cost is extreme. Mob programming suits specific scenarios: resolving complex architectural decisions, onboarding new team members, or tackling particularly difficult problems where diverse perspectives add value.
Automated Review Augmentation
Automated tools handle routine review tasks, freeing human reviewers for higher-level concerns. Static analysis tools catch style violations, common bug patterns, and security vulnerabilities automatically. These tools run on every commit, providing instant feedback without human intervention. Effective teams combine automated and human review—machines handle mechanical checks while humans evaluate design, logic, and contextual appropriateness.
```yaml
# Example: RuboCop automated review configuration (.rubocop.yml)
AllCops:
  NewCops: enable
  TargetRubyVersion: 3.2

Style/StringLiterals:
  EnforcedStyle: double_quotes

Metrics/MethodLength:
  Max: 15

# This runs automatically on pull requests;
# human reviewers focus on logic, not style.
```
Common Patterns
The Checklist Approach
Reviewers work through a structured checklist ensuring consistent coverage of important areas. The checklist includes categories like functionality, security, performance, maintainability, and testing. This pattern prevents reviewers from fixating on superficial issues while missing critical problems. Checklists also help junior reviewers understand what to examine when they lack experience identifying issues independently.
Effective checklists balance comprehensiveness with usability. An exhaustive 50-item checklist that reviewers skip because it's too cumbersome serves no purpose. Teams typically develop checklists tailored to their specific context, technology stack, and common issues. The checklist evolves as the team learns from past mistakes and changing requirements.
```ruby
# Checklist application example
class UserRegistration
  # Functionality: Does it handle valid inputs correctly?
  # Error Handling: What happens with invalid emails?
  # Security: Is the password properly hashed?
  # Performance: Does it hit the database unnecessarily?
  # Testing: Are edge cases covered?
  def register(email, password)
    return false unless valid_email?(email)
    return false unless strong_password?(password)

    hashed_password = BCrypt::Password.create(password)
    User.create!(email: email, password: hashed_password)
  rescue ActiveRecord::RecordNotUnique
    # Reviewer: Good - handles duplicate emails
    false
  end
end
```
The Two-Pass Review
The first pass examines high-level design and architecture. Reviewers evaluate whether the approach solves the problem appropriately, fits the existing architecture, and makes reasonable trade-offs. They identify fundamental issues that would require significant rework. Only after approving the high-level design does the second pass occur, examining implementation details, edge cases, and code quality.
This pattern prevents wasted effort nitpicking code that might need complete redesign. It also provides better feedback to authors—architectural concerns raised early allow course correction before investing in implementation details. Large changes particularly benefit from this approach, as do contributions from developers less familiar with the codebase architecture.
The Lightweight Review
Not every change requires exhaustive examination. Simple, low-risk changes receive lighter review—often just a quick scan to verify the change makes sense and doesn't introduce obvious problems. This pattern acknowledges that review overhead should scale with risk. Configuration changes, documentation updates, and small bug fixes rarely need the same scrutiny as major features or security-sensitive modifications.
Teams define lightweight review criteria explicitly to prevent inconsistent application. Common triggers include: changes under 50 lines, changes to non-production code, changes that only affect tests, or changes from highly experienced developers working in familiar areas. Even lightweight reviews provide value through quick sanity checks and maintaining shared code awareness.
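Criteria like these can be encoded as a simple automated check. The following is a hedged sketch: the `lightweight_review?` helper, its thresholds, and its path rules are illustrative assumptions, not an established convention.

```ruby
# Hypothetical CI helper that labels a change as eligible for
# lightweight review. Thresholds and path patterns are illustrative
# assumptions a team would tune to its own context.
LIGHTWEIGHT_MAX_LINES = 50
LOW_RISK_PATHS = [%r{\Adocs/}, %r{\Aspec/}, /\.md\z/].freeze

def lightweight_review?(changed_files, lines_changed)
  return false if lines_changed > LIGHTWEIGHT_MAX_LINES

  # Every touched file must match a low-risk pattern
  changed_files.all? { |path| LOW_RISK_PATHS.any? { |rule| path.match?(rule) } }
end

lightweight_review?(["docs/setup.md", "spec/user_spec.rb"], 20) # => true
lightweight_review?(["app/models/user.rb"], 20)                 # => false
```

A check like this only routes the change; the final call on review depth stays with the reviewer.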
The Teaching Review
When reviewing code from less experienced developers, reviews become teaching opportunities. Reviewers explain not just what to change but why the change improves the code. They reference documentation, share examples of better patterns, and connect feedback to broader principles. This pattern turns review overhead into an investment in developer growth.
Teaching reviews take longer and require thoughtful explanation. The reviewer must understand not just that something is wrong but why the author made that choice and what knowledge gap needs filling. Effective teaching reviews avoid overwhelming authors with too much feedback—they prioritize critical lessons and allow some issues to be addressed in future iterations as the author's understanding develops.
```ruby
# Teaching review example
class ReportGenerator
  def generate_report(user_ids)
    # Reviewer comment:
    # "This loads all users into memory at once. For large datasets,
    # this can cause memory issues. Consider using find_each:
    #
    #   User.where(id: user_ids).find_each do |user|
    #     process_user(user)
    #   end
    #
    # find_each loads records in batches of 1000 by default,
    # keeping memory usage constant regardless of dataset size.
    # See: https://api.rubyonrails.org/classes/ActiveRecord/Batches.html"
    users = User.where(id: user_ids).to_a
    users.map { |user| process_user(user) }
  end
end
```
The Collaborative Refinement
Rather than demanding changes, reviewers suggest alternatives and discuss trade-offs with the author. This creates dialogue rather than top-down critique. The pattern works particularly well when multiple reasonable approaches exist and choosing between them involves subjective judgment. It also helps when reviewers identify potential improvements but recognize the current implementation works adequately.
This approach requires balance—not every issue merits extended discussion. Teams typically reserve collaborative refinement for architectural decisions, significant refactoring, or situations where learning and alignment matter more than quickly merging code. For straightforward issues like fixing a bug or correcting a clear mistake, direct feedback remains more efficient.
Tools & Ecosystem
Version Control Platform Reviews
GitHub, GitLab, and Bitbucket provide integrated code review through pull requests or merge requests. These platforms display diffs, allow inline comments, support review approval workflows, and integrate with CI/CD pipelines. GitHub's pull request model has become the de facto standard—authors open PRs, request reviews from specific team members, reviewers examine changes and leave comments, and the PR merges after approval.
These platforms offer different review features. GitHub provides single-file and multi-file comment threads, suggested changes that authors can apply with one click, and required reviewers who must approve before merging. GitLab adds merge request dependencies and approval rules based on changed files. Bitbucket integrates tightly with Jira for issue tracking integration. Teams choose platforms based on their existing toolchain and specific workflow requirements.
```shell
# GitHub PR review workflow using the GitHub CLI
# Create a pull request
gh pr create --title "Add rate limiting" --body "Implements token bucket rate limiting"
# Request review from specific team members
gh pr edit --add-reviewer senior-dev,security-lead
# Reviewers can check out the PR locally to test
gh pr checkout 123
# After approval, merge
gh pr merge 123 --squash
```
Static Analysis Tools
RuboCop serves as Ruby's primary static analysis tool, checking code against configurable style guidelines and detecting common issues. The tool runs automatically in CI pipelines, failing builds when violations exceed configured thresholds. This removes style discussions from human reviews—RuboCop enforces consistency mechanically.
Reek detects code smells in Ruby—long methods, high complexity, feature envy, and other indicators of design problems. Unlike RuboCop's focus on style, Reek examines code structure and design. The tool produces false positives requiring configuration tuning, but it catches genuine design issues humans might miss. Integrating Reek into CI helps maintain design quality as codebases grow.
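As an illustration (not Reek's actual output), the structure its Feature Envy check targets looks like the following, along with the conventional fix of moving the behavior onto the object that owns the data:

```ruby
# Sketch of the "Feature Envy" smell Reek detects: a method that reads
# another object's data far more than its own.
class Order
  attr_reader :id, :item_count, :total

  def initialize(id, item_count, total)
    @id, @item_count, @total = id, item_count, total
  end

  # The usual remedy: the behavior lives on the class that owns the data.
  def summary
    "#{id}: #{item_count} items, total #{total}"
  end
end

class InvoicePrinter
  # Reek would flag a method like this as Feature Envy: every expression
  # reads from `order`, none from InvoicePrinter's own state.
  def envious_summary(order)
    "#{order.id}: #{order.item_count} items, total #{order.total}"
  end
end

Order.new(1, 2, 30).summary # => "1: 2 items, total 30"
```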
```yaml
# Example: RuboCop detecting issues automatically (.rubocop.yml)
Metrics/CyclomaticComplexity:
  Max: 8
```

```shell
$ rubocop app/services/payment_processor.rb
Inspecting 1 file
C

Offenses:

app/services/payment_processor.rb:23:3: C: Metrics/CyclomaticComplexity:
Cyclomatic complexity for process_payment is too high. [12/8]
  def process_payment(amount, currency, method)
  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
```
Brakeman analyzes Rails applications for security vulnerabilities. It identifies issues like SQL injection, command injection, unvalidated redirects, and mass assignment vulnerabilities. Running Brakeman in CI catches security issues before they reach production. The tool generates detailed reports explaining each vulnerability and suggesting fixes.
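Command injection, one of the issue classes Brakeman scans for, comes down to interpolating untrusted input into a shell string. A minimal sketch using Ruby's standard Shellwords module; the vulnerable call is shown only as a comment, and the filenames are invented:

```ruby
require 'shellwords'

# Attacker-controlled input
filename = "photo.png; rm -rf /"

# Vulnerable pattern Brakeman flags: the shell would treat `;` as a
# command separator.
#   system("convert #{filename} out.png")

# Safer alternatives: escape the input, or bypass the shell entirely
# by passing arguments separately.
escaped = Shellwords.escape(filename)    # backslash-escapes ; and spaces
# system("convert #{escaped} out.png")
# system("convert", filename, "out.png") # no shell involved at all
```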
Code Coverage Tools
SimpleCov measures test coverage for Ruby projects. Coverage reports show which lines, branches, and files lack test coverage. Reviews can verify that new code includes appropriate tests by checking coverage deltas. While high coverage doesn't guarantee good tests, it identifies completely untested code that poses risk.
```ruby
# SimpleCov configuration (spec/spec_helper.rb)
require 'simplecov'

SimpleCov.start 'rails' do
  add_filter '/spec/'
  add_filter '/config/'
  add_group 'Services', 'app/services'
  add_group 'Models', 'app/models'
  minimum_coverage 90
  refuse_coverage_drop # Fails if coverage decreases
end
```
Dependency Analysis
Bundler Audit checks Ruby projects for known vulnerabilities in gem dependencies. The tool compares Gemfile.lock against a database of CVEs affecting Ruby gems. Running bundle-audit in CI prevents merging code that introduces known vulnerabilities. Teams address findings either by upgrading vulnerable gems or acknowledging accepted risks with documented justification.
Documentation Generators
YARD generates Ruby documentation from inline comments. Well-documented code is easier to review because reviewers understand intent and contracts without deep code analysis. Teams can require YARD documentation for public APIs, ensuring reviews verify both code quality and documentation completeness.
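A minimal sketch of what such documentation looks like in practice; the method and its tags are illustrative. The `@param`/`@return`/`@raise` tags give reviewers a stated contract to check the implementation against:

```ruby
# Applies a percentage discount to a price.
#
# @param price [Numeric] the original price
# @param percent [Numeric] discount percentage between 0 and 100
# @return [Float] the discounted price
# @raise [ArgumentError] if percent is outside 0..100
def apply_discount(price, percent)
  raise ArgumentError, "percent must be in 0..100" unless (0..100).cover?(percent)

  price * (100 - percent) / 100.0
end

apply_discount(200, 25) # => 150.0
```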
Review Metrics Tools
Some teams use tools that analyze review patterns and provide metrics: average review time, comment density, approval patterns, and reviewer participation. These metrics identify bottlenecks (specific reviewers who consistently delay reviews), areas needing attention (files that consistently generate many comments), and process improvements. However, metric-driven management creates perverse incentives—developers might avoid thorough reviews to improve metrics.
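A sketch of how one such metric, review latency, might be derived from raw pull request timestamps; the data here is invented for illustration:

```ruby
# Hypothetical records: when each pull request was opened and when it
# received its first review response.
reviews = [
  { opened: Time.utc(2024, 5, 1, 9),  first_response: Time.utc(2024, 5, 1, 11) },
  { opened: Time.utc(2024, 5, 1, 14), first_response: Time.utc(2024, 5, 2, 10) },
]

# Subtracting Time objects yields seconds; convert to hours.
turnaround_hours = reviews.map { |r| (r[:first_response] - r[:opened]) / 3600.0 }
average = turnaround_hours.sum / turnaround_hours.size

puts format("average first-response time: %.1f hours", average) # 11.0 hours
```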
Practical Examples
Security Vulnerability Review
```ruby
# Original code submitted for review
class UserSearch
  def search(query)
    # SECURITY ISSUE: SQL injection vulnerability
    User.where("name LIKE '%#{query}%'")
  end
end

# Reviewer comment:
# "This code is vulnerable to SQL injection. An attacker could inject
# malicious SQL by searching for: '; DROP TABLE users; --
# Use parameterized queries instead:"

class UserSearch
  def search(query)
    # Fixed: the parameterized query prevents SQL injection.
    # (sanitize_sql_like would additionally escape % and _ wildcards
    # in the user's input.)
    User.where("name LIKE ?", "%#{query}%")
  end
end
```
The reviewer identifies a critical security vulnerability that could allow attackers to execute arbitrary SQL. The fix uses Rails' parameterized query syntax, which properly escapes user input. The review comment explains both the vulnerability and the correct approach, turning the review into a teaching moment about secure coding practices.
Performance Issue Review
```ruby
# Original implementation
class OrderReport
  def generate_report
    # PERFORMANCE ISSUE: N+1 queries
    orders = Order.all
    orders.map do |order|
      {
        order_id: order.id,
        customer_name: order.customer.name, # separate query for each order
        items: order.items.map(&:name)      # separate query for each order
      }
    end
  end
end

# Reviewer comment:
# "This code generates N+1 queries. For 100 orders, it executes 201 queries:
# 1 to fetch orders, 100 for customers, 100 for items. Use eager loading:"

class OrderReport
  def generate_report
    # Fixed: eager loading reduces queries from 201 to 3
    orders = Order.includes(:customer, :items)
    orders.map do |order|
      {
        order_id: order.id,
        customer_name: order.customer.name,
        items: order.items.map(&:name)
      }
    end
  end
end
```
The reviewer catches a classic N+1 query problem that would cause severe performance degradation with real data volumes. The fix uses ActiveRecord's includes method to eagerly load associations, reducing database queries from hundreds to three. The comment quantifies the impact, helping the author understand the severity.
Error Handling Review
```ruby
# Original code
class PaymentProcessor
  def charge(amount)
    # ISSUE: Insufficient error handling
    gateway.charge(amount)
    send_confirmation_email
    update_inventory
  end
end

# Reviewer comment:
# "What happens if the gateway charge fails? Current code still sends
# confirmation and updates inventory. Consider:"

class PaymentProcessor
  def charge(amount)
    # Fixed: proper error handling with rollback capability.
    # charge_result is assigned before the block so the rescue
    # clauses below can see it.
    charge_result = nil
    transaction do
      charge_result = gateway.charge(amount)
      raise PaymentFailed unless charge_result.success?

      send_confirmation_email
      update_inventory
      charge_result
    end
  rescue PaymentFailed => e
    log_payment_failure(e)
    notify_admin(e)
    raise
  rescue StandardError => e
    # If anything fails after a successful charge, refund it
    gateway.refund(charge_result.transaction_id) if charge_result&.success?
    raise
  end
end
```
The reviewer identifies missing error handling that could lead to data inconsistency—charging customers without providing goods or services. The improved version wraps operations in a transaction, handles specific error cases, and ensures charges are refunded if subsequent operations fail.
Code Duplication Review
```ruby
# Original code with duplication
class EmailService
  def send_welcome_email(user)
    mail = Mail.new
    mail.from = "noreply@example.com"
    mail.to = user.email
    mail.subject = "Welcome!"
    mail.body = "Welcome to our service"
    mail.deliver
  end

  def send_password_reset(user)
    mail = Mail.new
    mail.from = "noreply@example.com"
    mail.to = user.email
    mail.subject = "Password Reset"
    mail.body = "Reset your password: #{user.reset_token}"
    mail.deliver
  end

  def send_notification(user, message)
    mail = Mail.new
    mail.from = "noreply@example.com"
    mail.to = user.email
    mail.subject = "Notification"
    mail.body = message
    mail.deliver
  end
end

# Reviewer comment:
# "These methods duplicate email setup logic. Extract the common pattern:"

class EmailService
  FROM_ADDRESS = "noreply@example.com"

  def send_welcome_email(user)
    send_email(user, "Welcome!", "Welcome to our service")
  end

  def send_password_reset(user)
    send_email(user, "Password Reset", "Reset your password: #{user.reset_token}")
  end

  def send_notification(user, message)
    send_email(user, "Notification", message)
  end

  private

  def send_email(user, subject, body)
    mail = Mail.new
    mail.from = FROM_ADDRESS
    mail.to = user.email
    mail.subject = subject
    mail.body = body
    mail.deliver
  end
end
```
The reviewer identifies repeated code patterns and suggests refactoring to eliminate duplication. The improved version extracts common logic into a private method, making the code more maintainable and reducing the surface area for bugs.
Testing Coverage Review
```ruby
# Original code submitted without tests
class DiscountCalculator
  def calculate_discount(order)
    return 0 if order.total < 100
    return order.total * 0.05 if order.total < 500
    return order.total * 0.10 if order.total < 1000

    order.total * 0.15
  end
end

# Reviewer comment:
# "This logic has multiple branches but no tests. Add tests covering
# each tier and boundary conditions:"

# spec/services/discount_calculator_spec.rb
RSpec.describe DiscountCalculator do
  describe '#calculate_discount' do
    let(:calculator) { DiscountCalculator.new }

    context 'when order total is below 100' do
      it 'applies no discount' do
        order = double(total: 99)
        expect(calculator.calculate_discount(order)).to eq(0)
      end
    end

    context 'when order total is between 100 and 499' do
      it 'applies a 5% discount' do
        order = double(total: 100)
        expect(calculator.calculate_discount(order)).to eq(5)
      end
    end

    context 'when order total is between 500 and 999' do
      it 'applies a 10% discount' do
        order = double(total: 500)
        expect(calculator.calculate_discount(order)).to eq(50)
      end
    end

    context 'when order total is 1000 or more' do
      it 'applies a 15% discount' do
        order = double(total: 1000)
        expect(calculator.calculate_discount(order)).to eq(150)
      end
    end
  end
end
```
The reviewer requires tests for business logic before approving the change. The suggested tests cover each code path and include boundary conditions (99, 100, 500, 1000) where behavior changes. This ensures the logic works correctly and prevents regressions in future changes.
Common Pitfalls
Bikeshedding
Reviewers fixate on trivial details while ignoring significant issues. Lengthy discussions about variable naming or minor style preferences consume time better spent on logic and design. This happens because trivial issues are easy to identify and discuss—anyone can have an opinion about whether a variable should be named user_data or user_info, but spotting a subtle concurrency bug requires expertise and effort.
The solution involves discipline and prioritization. Reviewers should identify critical issues first and skip trivial matters if they don't affect functionality or maintainability. Teams can also establish explicit policies: style issues should be caught by automated tools, and naming discussions should conclude within one round of comments unless the naming is genuinely confusing.
Harsh or Personal Criticism
Feedback that attacks the developer rather than the code damages team relationships and makes developers defensive. Comments like "This is terrible" or "Did you even test this?" create antagonism without providing constructive guidance. Even technically accurate criticism delivered harshly reduces the likelihood that the author will act on the feedback.
Effective reviewers phrase feedback objectively and focus on the code: "This approach doesn't handle concurrent access" rather than "You forgot about concurrency." They explain reasoning and suggest alternatives rather than simply rejecting code. When significant rework is required, reviewers acknowledge the effort already invested and frame the feedback as helping improve the end result.
Ignoring Context
Reviewers who demand perfection regardless of circumstances become bottlenecks. A hotfix for a critical production issue might not follow all best practices, but delaying it for code cleanup while the system is down causes harm. Technical debt is sometimes an appropriate trade-off when deadlines or business requirements constrain options.
Effective reviewers consider the change's context: urgency, risk level, and constraints. They distinguish between must-fix issues that would cause serious problems and nice-to-have improvements. For pragmatic compromises, reviewers can approve the change while creating follow-up tasks to address technical debt when time permits.
Inconsistent Standards
Applying different review standards to different developers or changes creates confusion and resentment. Strict reviews for junior developers but rubber-stamping senior developers' code denies juniors the autonomy to contribute while allowing senior developers' mistakes to pass unchecked. Inconsistent enforcement of coding standards leads to a hodgepodge codebase where different sections follow different conventions.
Teams address this through documented standards that apply universally and automated tools that enforce consistency mechanically. Reviewers should apply the same criteria regardless of who authored the code. Some variation makes sense—novice contributors might receive more detailed explanations—but the standards themselves should remain constant.
Delayed Reviews
Slow review turnaround disrupts author flow and reduces development velocity. When reviews take days, authors must context-switch between multiple work items, losing the focused state that enables efficient work. Long delays also discourage small, frequent commits—developers batch changes into larger commits to avoid waiting for reviews repeatedly.
Teams establish review SLAs: initial response within a specified timeframe (often 24 hours for non-urgent changes, shorter for urgent ones). They distribute review responsibilities across the team rather than creating bottlenecks with single reviewers. Some teams use review assignment rotation or tools that automatically assign reviewers to balance workload.
Superficial Reviews
Reviewers who simply click approve without careful examination fail to provide value. This happens when review becomes a bureaucratic checkbox rather than a genuine quality gate. Superficial reviews miss bugs, allow design problems to accumulate, and waste the opportunity for knowledge sharing.
Organizations combat superficial reviews through culture rather than enforcement. Teams that value quality naturally conduct thorough reviews. Making reviews a visible part of developer contributions—recognizing good reviewers and treating review quality as part of performance evaluation—encourages meaningful engagement. However, over-monitoring creates perverse incentives where reviewers leave comments to demonstrate engagement rather than add value.
Overreliance on Review
Teams that depend entirely on code review to catch problems often ship defective code because review alone cannot guarantee correctness. Reviewers miss issues due to time pressure, fatigue, or limited context. Review works best as one layer in a comprehensive quality strategy that includes testing, static analysis, and well-designed systems that prevent entire classes of errors.
Effective teams combine multiple quality gates: automated tests verify behavior, static analysis catches common issues, reviews focus on design and logic, and integration testing validates the system works as a whole. Review catches issues that slip through other gates but should not carry the entire quality burden.
Review Theater
Changes undergo perfunctory review purely to satisfy process requirements without anyone actually examining the code carefully. This happens when organizations mandate review without creating the culture and time allocation that enables meaningful review. Developers request approval from whoever will click approve fastest, reviews happen after hours when reviewers are tired, or reviewers approve changes they don't fully understand to avoid blocking progress.
This represents a cultural rather than technical problem. Organizations must genuinely value quality, allocate time for review activities, and make review effectiveness visible. Measuring review quality through defect escape rates or post-review issues can highlight whether reviews actually improve code quality or simply add process overhead.
Reference
Review Checklist Categories
| Category | Focus Areas | Priority |
|---|---|---|
| Functionality | Correct behavior, edge cases, error conditions | Critical |
| Security | Input validation, authentication, authorization, data exposure | Critical |
| Performance | Database queries, algorithm efficiency, memory usage | High |
| Maintainability | Code clarity, duplication, documentation | High |
| Testing | Test coverage, test quality, missing scenarios | High |
| Design | Architectural fit, separation of concerns, extensibility | Medium |
| Style | Consistency with codebase conventions | Low |
Common Code Issues to Check
| Issue Type | What to Look For | Example |
|---|---|---|
| SQL Injection | String concatenation in SQL queries | User.where("id = #{id}") |
| N+1 Queries | Database queries in loops | orders.each { order.customer.name } |
| Memory Leaks | Unbounded collections, circular references | @cache = {}; @cache[key] = value # never cleaned |
| Race Conditions | Shared mutable state, check-then-act patterns | if counter == 0 then counter = 1 |
| Resource Leaks | Files or connections not closed | File.open(path).read # file not closed |
| Null/Nil Issues | Missing nil checks before method calls | user.email.downcase # crashes if email nil |
| Weak Validation | Missing input validation, type checking | amount.to_i # no check that amount is positive |
| Hard-coded Secrets | API keys, passwords in code | API_KEY = "sk_live_abc123" |
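Two of the fixes from the table can be sketched in plain Ruby. The SQL example assumes an ActiveRecord-style `User` model and is shown only as comments; the resource-leak fix uses `Tempfile` so the sketch runs standalone:

```ruby
require "tempfile"

# SQL Injection -- bind values instead of interpolating:
#   User.where("id = #{id}")      # vulnerable
#   User.where("id = ?", id)      # parameterized
#   User.where(id: id)            # or use the hash form

# Resource Leaks -- the block form of File.open closes the handle automatically:
contents = Tempfile.create("review-demo") do |tmp|
  tmp.write("hello")
  tmp.flush
  File.open(tmp.path) { |f| f.read }  # file closed when the block exits
end
puts contents  # => hello

# For whole-file reads, File.read(path) avoids the open handle entirely.
```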
Review Response Time Guidelines
| Change Type | Initial Response Target | Approval Target |
|---|---|---|
| Critical Hotfix | 1 hour | 4 hours |
| Security Fix | 4 hours | 24 hours |
| Feature | 24 hours | 3 days |
| Refactoring | 24 hours | 1 week |
| Documentation | 48 hours | 1 week |
Ruby-Specific Review Points
| Area | What to Check | Preferred Pattern |
|---|---|---|
| String Interpolation | Using concatenation instead of interpolation | Use "Hello #{name}" not "Hello " + name |
| Array/Hash Iteration | Using each where map/select appropriate | Use map for transformations, select for filtering |
| Conditional Assignment | Using if/else for simple assignments | Use ternary for simple cases: x = cond ? a : b |
| Method Definition | Parentheses inconsistency | Use parentheses when a method takes params, omit for zero-arg |
| Boolean Returns | Returning truthy/falsy instead of true/false | Use !! or explicit true/false for booleans |
| Exception Handling | Rescuing StandardError implicitly | Rescue specific exceptions, avoid bare rescue |
| Symbol Usage | String keys in hashes | Use symbols for hash keys unless strings needed |
| Block Usage | Multi-line blocks with braces | Use do/end for multi-line, braces for single-line |
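Several of the preferred patterns above, gathered into one small runnable sketch (the names and values are invented for illustration):

```ruby
name    = "Ada"
numbers = [1, 2, 3, 4]

# String interpolation instead of concatenation
greeting = "Hello #{name}"

# map for transformations, select for filtering
doubled = numbers.map { |n| n * 2 }   # => [2, 4, 6, 8]
evens   = numbers.select(&:even?)     # => [2, 4]

# Ternary for simple conditional assignment
status = numbers.empty? ? :empty : :present

# !! coerces a truthy value into a real boolean
def admin?(role)
  !!(role && role == :admin)
end

# Symbols for hash keys; do/end for multi-line blocks
config = { retries: 3, timeout: 5 }
config.each do |key, value|
  puts "#{key}=#{value}"
end
```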
Review Workflow States
| State | Description | Next Actions |
|---|---|---|
| Draft | Author still working, not ready for review | Author marks ready when complete |
| Ready | Author requests review | Reviewer assigned, review begins |
| Changes Requested | Issues found requiring modification | Author addresses feedback, requests re-review |
| Approved | Review complete, no blocking issues | Code can merge |
| Merged | Change integrated into target branch | Review complete |
| Closed | Change abandoned or superseded | No further action |
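The state table can be modeled as a simple transition map. This sketch mirrors only the table; real review platforms permit additional transitions (closing a draft, reopening a closed change, and so on):

```ruby
# Allowed next states for each workflow state, per the table above.
TRANSITIONS = {
  draft:             [:ready],
  ready:             [:changes_requested, :approved],
  changes_requested: [:ready],
  approved:          [:merged],
  merged:            [],
  closed:            []
}.freeze

def transition(state, next_state)
  unless TRANSITIONS.fetch(state).include?(next_state)
    raise ArgumentError, "cannot move from #{state} to #{next_state}"
  end
  next_state
end

# Walk one review through a round of feedback to merge.
state = :draft
[:ready, :changes_requested, :ready, :approved, :merged].each do |step|
  state = transition(state, step)
end
state  # => :merged
```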
Review Comment Types
| Type | Purpose | Response Required |
|---|---|---|
| Blocking | Critical issue preventing approval | Must be addressed |
| Non-blocking | Improvement suggestion but not required | Optional |
| Question | Requesting clarification | Author should respond |
| Nitpick | Minor style or preference issue | Optional |
| Praise | Recognizing good code | No response needed |
| Learning | Explaining something for education | No response needed |
Automated Tool Integration
| Tool | Purpose | When It Runs | Typical Configuration |
|---|---|---|---|
| RuboCop | Style checking | Pre-commit hook, CI | Custom rules per project |
| Reek | Code smell detection | CI | Tune to reduce false positives |
| Brakeman | Security scanning | CI | Fail build on high-severity issues |
| SimpleCov | Test coverage | Test run | Minimum coverage threshold |
| Bundler Audit | Dependency vulnerabilities | CI, scheduled | Check on every commit |
| YARD | Documentation generation | CI | Generate for public APIs |
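One way to wire these tools into a single local quality gate is a rake task. The commands below are typical invocations but are assumptions; adapt them to your Gemfile and CI configuration:

```ruby
require "rake"
extend Rake::DSL

# Commands for each gate, run in order; any non-zero exit aborts the task.
QUALITY_GATES = [
  "bundle exec rubocop",                      # style checking
  "bundle exec reek",                         # code smell detection
  "bundle exec brakeman -q",                  # security scanning (Rails projects)
  "bundle exec bundle-audit check --update",  # dependency vulnerabilities
  "bundle exec rspec"                         # tests; SimpleCov hooks in via spec_helper
].freeze

desc "Run the automated review gates"
task :quality do
  QUALITY_GATES.each { |cmd| sh cmd }
end
```

Running the cheap, automated gates before human review keeps reviewers focused on design and logic rather than style nits the tools already catch.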
Review Metrics to Track
| Metric | What It Measures | Typical Target |
|---|---|---|
| Review Latency | Time from request to first review | Under 24 hours |
| Approval Time | Time from request to approval | Under 3 days |
| Comment Density | Comments per 100 lines changed | 2-5 comments |
| Revision Count | Rounds of feedback before approval | 1-2 revisions |
| Approval Rate | Percentage of changes approved | 70-90% |
| Reviewer Participation | Reviews per developer per week | 3-5 reviews |
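Two of these metrics can be computed directly from review records. The field names and sample data below are invented for illustration; real data would come from your code host's API:

```ruby
require "time"

reviews = [
  { requested_at: Time.parse("2024-05-01 09:00"),
    first_review_at: Time.parse("2024-05-01 15:00"),
    comments: 4, lines_changed: 120 },
  { requested_at: Time.parse("2024-05-02 10:00"),
    first_review_at: Time.parse("2024-05-03 10:00"),
    comments: 2, lines_changed: 40 }
]

# Review latency: hours from request to first review, averaged
latencies   = reviews.map { |r| (r[:first_review_at] - r[:requested_at]) / 3600.0 }
avg_latency = latencies.sum / latencies.size

# Comment density: comments per 100 lines changed
densities = reviews.map { |r| r[:comments] * 100.0 / r[:lines_changed] }

puts format("Average latency: %.1f h", avg_latency)
# => Average latency: 15.0 h
puts densities.map { |d| d.round(1) }.inspect
# => [3.3, 5.0]
```

Both sample densities fall inside the 2-5 target above; the 15-hour average latency is within the 24-hour guideline.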