Web Development

Every Layer of Review Makes Your Team Slower

A startup ships a bug to production. Management's response: add a code review requirement. Another bug slips through. Response: require two reviewers. Then a security incident. Response: add a security review step. Then a design inconsistency. Response: add a design review. Within two years, every change — no matter how small — passes through four review stages, takes three days to merge, and the developers who were once shipping daily are now spending more time reviewing code than writing it.

This pattern is so common it's almost a law of organizational behavior: every incident creates a new review layer, and no review layer is ever removed. The result is a review process that optimizes for preventing the last incident at the cost of preventing all future progress.

The Math of Review Queues

Each review layer isn't just additive — it's multiplicative. If a single review takes an average of 4 hours to complete (not the review itself, which takes 20 minutes, but the time the PR sits in the queue waiting for a reviewer to get to it), then two sequential reviews take 8 hours. Three take 12. Four take 16.
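The per-stage arithmetic is easy to sketch. The durations here are the article's illustrative averages (4-hour queue wait, 20-minute review), not measurements:

```python
QUEUE_WAIT_H = 4.0         # average time a PR waits per review stage
ACTIVE_REVIEW_H = 20 / 60  # the review itself

def pr_elapsed_hours(stages: int) -> float:
    """Sequential review stages each add a full queue wait."""
    return stages * (QUEUE_WAIT_H + ACTIVE_REVIEW_H)

for n in range(1, 5):
    print(f"{n} stage(s): ~{pr_elapsed_hours(n):.0f}h before merge")
```

Each stage re-enters the same queue, so the waits stack linearly with stages even before any feedback rounds.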

But it's worse than that, because of context switching. A developer submits a PR, then starts new work. When review feedback arrives hours later, they have to switch back to the old work, re-load context, address the feedback, and submit again — then wait again. Each review round costs 30-60 minutes of context-switching overhead on top of the queue time.

And there's the cascading effect. If reviewer A requests changes, the developer addresses them and resubmits. Now reviewer B (who hasn't seen the PR yet) reviews and requests different changes. The developer addresses those. Now reviewer A needs to re-review to verify their feedback was addressed, but they've moved on to other work and the PR is back in the queue.

Timeline of a PR through a 3-reviewer process:
Day 1 9:00  — Developer submits PR
Day 1 14:00 — Reviewer A reviews, requests changes
Day 1 15:00 — Developer addresses feedback, resubmits
Day 2 10:00 — Reviewer B reviews, requests different changes
Day 2 11:00 — Developer addresses, resubmits
Day 2 16:00 — Reviewer A re-reviews, approves
Day 3 11:00 — Reviewer C reviews, approves
Day 3 11:30 — Reviewer B re-reviews, approves
Day 3 12:00 — PR merges
Elapsed time: ~3 business days (~19 working hours, assuming 8-hour days)
Actual review time: ~90 minutes total
Actual code change time: ~2 hours
Time waiting in queues: ~15.5 hours
Queue time is over 80% of the working hours elapsed.
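The timeline above can be checked mechanically. The per-step active hours below are illustrative assumptions chosen to match the article's totals (~90 minutes of review, ~2 hours of rework):

```python
from datetime import datetime

# Each tuple: (timestamp, active human hours spent at that step).
events = [
    (datetime(2024, 1, 1, 9, 0),   0.0),  # developer submits PR
    (datetime(2024, 1, 1, 14, 0),  0.5),  # reviewer A requests changes
    (datetime(2024, 1, 1, 15, 0),  1.0),  # developer addresses feedback
    (datetime(2024, 1, 2, 10, 0),  0.5),  # reviewer B requests changes
    (datetime(2024, 1, 2, 11, 0),  1.0),  # developer addresses feedback
    (datetime(2024, 1, 2, 16, 0),  0.2),  # reviewer A re-reviews, approves
    (datetime(2024, 1, 3, 11, 0),  0.2),  # reviewer C approves
    (datetime(2024, 1, 3, 11, 30), 0.1),  # reviewer B re-reviews, approves
    (datetime(2024, 1, 3, 12, 0),  0.0),  # PR merges
]

WORKDAY = 8.0  # working hours per day, assuming a 9:00-17:00 schedule

def working_hours(start: datetime, end: datetime) -> float:
    """Working hours between two in-hours timestamps."""
    full_days = (end.date() - start.date()).days
    same_day = (end - start).total_seconds() / 3600 - full_days * 24
    return full_days * WORKDAY + same_day

elapsed = working_hours(events[0][0], events[-1][0])
active = sum(h for _, h in events)
queue = elapsed - active
print(f"elapsed: {elapsed:.1f}h working, active: {active:.1f}h, "
      f"queue share: {queue / elapsed:.0%}")
```

With these assumptions the queue share comes out just over 80%: almost all of the PR's lifetime is spent waiting, not being worked on.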

The Quality Paradox

The assumption behind adding review layers is that more review produces better code. This is true up to a point, and then it reverses.

One thoughtful reviewer catches real issues: logic errors, missed edge cases, security problems, API design concerns. A second reviewer occasionally catches things the first missed — maybe 10-20% of the time. A third reviewer almost never catches anything the first two didn't. The marginal value of each additional reviewer drops sharply.
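The diminishing returns can be made concrete with a toy model. Assume each reviewer independently catches a given defect with probability p; independence is optimistic, since real reviewers tend to miss the same things, so the true marginal gains are even smaller than this:

```python
def catch_probability(p: float, reviewers: int) -> float:
    """P(at least one of n independent reviewers catches a defect)."""
    return 1 - (1 - p) ** reviewers

P = 0.6  # per-reviewer catch rate; an illustrative assumption
prev = 0.0
for n in range(1, 5):
    total = catch_probability(P, n)
    print(f"{n} reviewer(s): {total:5.1%} caught, marginal +{total - prev:.1%}")
    prev = total
```

Even in this best case, the second reviewer adds 24 percentage points, the third under 10, and the fourth under 4, while each adds a full queue wait to every PR.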

Meanwhile, the quality cost of slow merges is real and unaccounted for. Long-lived branches diverge from main, requiring rebases that introduce merge errors. Developers batch more changes into each PR to avoid the per-PR review overhead, making each PR larger and harder to review carefully. Reviewers suffer from fatigue — when your review queue has 15 PRs in it, you skim rather than read carefully.

The paradox: adding review layers to improve quality can actually reduce quality because it creates incentives (larger PRs, rushed reviews, stale branches) that undermine the review process itself.

What Heavy Review Processes Actually Prevent

Review processes are often justified by specific incidents. 'We shipped a bug because nobody reviewed the code.' But asking whether review would have caught a specific bug is different from asking whether a review requirement improves overall outcomes.

Studies of code review effectiveness consistently find that review catches about 60% of defects — mostly surface-level issues like naming, formatting, and obvious logic errors. Deep architectural bugs, concurrency issues, and security vulnerabilities are rarely caught by code review because they require understanding the entire system, not just the diff. The bugs that cause production incidents are disproportionately the kind that review doesn't catch.

What actually prevents production incidents is testing, monitoring, and the ability to deploy and roll back quickly. A team that ships fast with good tests, feature flags, and instant rollbacks will have fewer production incidents than a team with four review layers but no integration tests and hour-long deploy cycles.

The Right Amount of Review

Code review is valuable. One reviewer per PR, with clear expectations about what they're reviewing for, is the sweet spot for most teams. Here's what that looks like in practice.

  • One reviewer, not two or three. The first reviewer catches 80% of what review will ever catch. The second reviewer adds marginal value at significant cost. Reserve multi-reviewer processes for genuinely high-risk changes (database migrations, auth changes, public API modifications).
  • Time-box the review queue. If a PR hasn't been reviewed within 4 hours, it's a process failure, not a lazy reviewer problem. The team needs to prioritize review capacity or accept that they have more developers than their review process can support.
  • Small PRs, not big ones. A 50-line PR gets a careful review in 10 minutes. A 500-line PR gets a cursory review in 30 minutes. The 50-line PR gets better review despite less time. Size limits (200-300 lines max) are more effective at improving review quality than adding reviewers.
  • Skip review for low-risk changes. Config changes, copy updates, dependency bumps, test additions — these don't need the same scrutiny as business logic changes. Define a 'low-risk' category and allow self-merge with post-merge review.
  • Automate what machines do better. Linting, formatting, type checking, test coverage — these are review tasks that machines do faster and more consistently than humans. Don't waste reviewer attention on things that a CI check handles.
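Two of the points above (size limits and a low-risk fast path) are simple enough to enforce as a CI check. This is a sketch, not a drop-in gate: the file patterns, the 300-line cap, and the tier names are all assumptions to adapt to your repo.

```python
import fnmatch

MAX_CHANGED_LINES = 300  # assumed size cap; tune per team
LOW_RISK_PATTERNS = ["*.md", "*.txt", "config/*.yaml", "tests/*", "*.lock"]

def classify_pr(changed_files: dict[str, int]) -> str:
    """changed_files maps path -> lines changed. Returns a review tier."""
    total = sum(changed_files.values())
    # Low-risk only if every touched file matches a low-risk pattern.
    if all(any(fnmatch.fnmatch(f, pat) for pat in LOW_RISK_PATTERNS)
           for f in changed_files):
        return "low-risk: self-merge, post-merge review"
    if total > MAX_CHANGED_LINES:
        return f"too large ({total} lines): split before review"
    return "standard: one reviewer"

print(classify_pr({"docs/intro.md": 40}))
print(classify_pr({"src/auth.py": 120, "tests/test_auth.py": 80}))
print(classify_pr({"src/api.py": 450}))
```

The check runs in seconds and applies the policy consistently, which is exactly the kind of judgment call you don't want to spend reviewer attention on.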

The Cultural Problem

Reducing review layers is hard because it feels like reducing safety. Nobody wants to be the person who argued for less review right before a production incident. This is an organizational culture problem, not a technical one.

The framing that helps: review is one of many safety mechanisms, and it has diminishing returns. Adding a fourth reviewer to prevent bugs is like adding a fourth padlock to prevent theft — the first lock does most of the work, and each additional lock adds inconvenience without proportional security. You wouldn't add four padlocks. Don't add four reviewers.

Teams that ship fast and break less tend to invest in the mechanisms that actually prevent incidents: comprehensive automated testing, feature flags for gradual rollout, robust monitoring with alerting, one-click rollbacks, and a blameless culture that treats incidents as learning opportunities rather than blame targets. These investments compound over time in a way that review layers don't.

Removing a Review Layer

If your team has accumulated too many review requirements, here's how to reduce them without causing panic.

Start by measuring your current process. How long do PRs take from submission to merge? How much of that time is queue time vs. active review? How many PRs are in the review queue at any given time? These numbers make the cost visible — most teams are shocked when they see that their average PR takes 3 days to merge.
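The measurement itself is a small script once you can export per-PR timestamps (most Git hosts expose these via their API). The records here are hypothetical data standing in for that export:

```python
from datetime import datetime
from statistics import median

# (submitted, first_review_started, merged) -- hypothetical exported data
prs = [
    (datetime(2024, 1, 1, 9),  datetime(2024, 1, 1, 14), datetime(2024, 1, 3, 12)),
    (datetime(2024, 1, 2, 10), datetime(2024, 1, 2, 16), datetime(2024, 1, 4, 9)),
    (datetime(2024, 1, 3, 11), datetime(2024, 1, 4, 10), datetime(2024, 1, 5, 15)),
]

def hours(a: datetime, b: datetime) -> float:
    return (b - a).total_seconds() / 3600

wait_to_first_review = [hours(s, r) for s, r, _ in prs]
time_to_merge = [hours(s, m) for s, _, m in prs]

print(f"median wait for first review: {median(wait_to_first_review):.0f}h")
print(f"median time to merge:         {median(time_to_merge):.0f}h")
```

Medians are a deliberate choice over means: one PR that sat for a week shouldn't hide the fact that the typical PR also waits far longer than anyone assumes.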

Then run an experiment. For one month, require one reviewer instead of two. Track the same metrics. Did incident rates change? Did code quality (measured by defect rates, not by vibes) change? Almost always, the answer is: incidents didn't increase, quality stayed the same, and throughput improved significantly.

The goal isn't zero review — it's the minimum review that maintains quality while maximizing throughput. That minimum is almost always less than what teams are currently doing, because review layers accumulate through incident response but are never removed through process optimization. As with most things in building great software, the answer isn't more process — it's the right process.