SECURITY · 2023-08-25 · BY STORECODE

Security reviews that don’t block shipping

How we changed security reviews so they protect users without turning into unpredictable roadblocks.

security · reviews · delivery · risk

Security reviews have a bad reputation in many teams.

They’re seen as a final boss: you build the thing, then you throw it over a wall and hope the review doesn’t send you back to the start.

We had our share of painful reviews:

  • last-minute surprises about data flows
  • vague "this feels risky" feedback without concrete next steps
  • features held back for weeks because the queue was full

We wanted something different: security reviews that made risks clearer, helped us design safer systems, and still let us ship.

Constraints

  • We didn’t have a large central security team.
  • Product teams owned most implementation work.
  • Not every change warranted the same level of scrutiny.

What we changed

1. Define which changes need a security review

We made a short list of triggers:

  • new ways of handling authentication or authorization
  • changes to how we store or transmit sensitive data
  • new external integrations with access to important data or actions

If a change didn’t hit any of these, it usually didn’t need a dedicated security review beyond normal code review and tests.

This reduced noise and focused attention where it mattered.
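
To keep the gate cheap to apply, triggers like these can live next to the code as a tiny predicate. A minimal sketch in Python (the field names and structure are illustrative assumptions, not our actual tooling):

    from dataclasses import dataclass

    @dataclass
    class ChangeDescription:
        """Hypothetical self-assessment a team attaches to a proposed change."""
        touches_authn_or_authz: bool = False           # new auth patterns
        changes_sensitive_data_handling: bool = False  # storage or transport of sensitive data
        adds_external_integration: bool = False        # third party with access to data or actions

    def needs_security_review(change: ChangeDescription) -> bool:
        """A change needs a dedicated review if it hits any trigger."""
        return (
            change.touches_authn_or_authz
            or change.changes_sensitive_data_handling
            or change.adds_external_integration
        )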

2. Start reviews earlier

We moved security reviews into the design phase.

Design docs for risky changes now include a short "security considerations" section:

  • what data is involved
  • who can access new capabilities
  • how failure modes might be abused

Security reviewers look at this section before implementation is locked in.

This made it easier to suggest safer patterns (e.g., different data partitioning, stricter scopes) without rewrites.
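
In practice the section can be three or four lines. A hypothetical filled-in example for a data-export feature (the feature and details are invented for illustration):

    Security considerations
      • Data involved: order history and shipping addresses for the requesting account only.
      • Access: account owners; support staff go through an audited impersonation flow instead.
      • Failure modes: expired export links could be replayed, so links are single-use and short-lived.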

3. Use checklists instead of open-ended questions

We built small, focused checklists for common patterns:

  • new admin tools
  • background jobs that touch sensitive data
  • user-facing flows with elevated access

Reviewers use these checklists to structure feedback:

  • "Have we logged access to this operation?"
  • "Is the minimum necessary data exposed?"
  • "Can this action be rate-limited or audited?"

This kept reviews from turning into unstructured "what else could go wrong?" sessions.
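
Because the checklists are short and pattern-specific, they are easy to keep as data in version control and render into review templates. A minimal sketch reusing the questions above (the pattern keys and helper are assumptions, not our actual tooling):

    # Checklists keyed by the change pattern they apply to.
    CHECKLISTS: dict[str, list[str]] = {
        "admin_tool": [
            "Have we logged access to this operation?",
            "Is the minimum necessary data exposed?",
            "Can this action be rate-limited or audited?",
        ],
        "background_job_sensitive_data": [
            "Is the minimum necessary data exposed?",
            "Can this action be rate-limited or audited?",
        ],
    }

    def checklist_for(pattern: str) -> list[str]:
        # Unknown patterns get no checklist; normal code review still applies.
        return CHECKLISTS.get(pattern, [])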

4. Make outcomes concrete

Every security review ends with one of a few clear outcomes:

  • Approved as designed.
  • Approved with follow-ups: documented tasks that can ship shortly after launch.
  • Blocker: specific reasons the change cannot go live yet.

Blockers must point to:

  • an explicit requirement (policy, regulation, or documented practice)
  • concrete changes that will resolve the issue

This made "no" easier to understand and act on.
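
One way to keep blockers honest is to record outcomes as structured data, so a blocker literally cannot be saved without the two fields that make it actionable. A sketch under that assumption (the types are hypothetical, not our actual tracker):

    from dataclasses import dataclass, field

    @dataclass
    class Blocker:
        """A 'no' that must explain itself."""
        requirement: str             # policy, regulation, or documented practice
        required_changes: list[str]  # concrete changes that resolve the issue

        def __post_init__(self) -> None:
            if not self.requirement or not self.required_changes:
                raise ValueError("a blocker must cite a requirement and concrete fixes")

    @dataclass
    class ReviewOutcome:
        decision: str                # "approved", "approved_with_follow_ups", or "blocked"
        follow_ups: list[str] = field(default_factory=list)
        blocker: Blocker | None = None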

5. Track review load and timelines

We started tracking:

  • how many reviews we did per quarter
  • how long they took from request to decision
  • how often we found late-breaking issues vs. design-stage issues

This helped us spot bottlenecks and adjust staffing or scope.
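
All three numbers fall out of timestamped review records. A minimal sketch, assuming records already filtered to one quarter and using invented field names:

    from datetime import date
    from statistics import median

    # Hypothetical records: when the review was requested and decided,
    # and whether its findings surfaced at design time or late in the cycle.
    reviews = [
        {"requested": date(2023, 4, 3), "decided": date(2023, 4, 7), "stage": "design"},
        {"requested": date(2023, 5, 10), "decided": date(2023, 5, 26), "stage": "late"},
    ]

    count = len(reviews)
    median_days = median((r["decided"] - r["requested"]).days for r in reviews)
    design_share = sum(r["stage"] == "design" for r in reviews) / count

    print(f"{count} reviews, median {median_days} days to decide, "
          f"{design_share:.0%} of findings at design stage")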

Results / Measurements

After several quarters:

  • more reviews happened during design, when changes were cheaper
  • fewer features were delayed by "surprise" security issues late in the cycle
  • product teams reported that checklists made it easier to prepare for reviews

We still had disagreements about trade-offs.

The difference was that those disagreements happened earlier and with more shared context.

Takeaways

  • Security reviews work best when they start at design time, not at the end.
  • Clear triggers focus review effort on changes that can meaningfully increase risk.
  • Checklists and concrete outcomes reduce ambiguity and help teams act on feedback.
  • Tracking review load and timing keeps the process from quietly turning into a bottleneck.
