Core Web Vitals Optimization Playbook
Why Your Performance Score Lies to You
Lighthouse gives you a number. That number makes you feel something — good or bad. But here's the thing most engineers miss: the lab score and the field score measure different realities. Your Lighthouse 98 means nothing if real users on a 3G connection in Mumbai are staring at a blank screen for 4 seconds.
Core Web Vitals are Google's attempt to measure what users actually experience. Not server response time. Not bundle size. Not time-to-interactive. Three specific things: how fast the main content appears (LCP), how visually stable the page is (CLS), and how responsive the page feels to interaction (INP).
This playbook is the debugging workflow I've used across dozens of production apps. For each metric, you'll learn exactly what to measure, where to look, and what to fix — in priority order.
1. Lab data (Lighthouse) tests one device on one network. Field data (CrUX) reflects real users. Optimize for field data.
2. Each Core Web Vital has a 'good' threshold: LCP under 2.5s, CLS under 0.1, INP under 200ms.
3. Fix the worst metric first. A single bad vital fails the entire assessment.
4. Measure before and after every change. Performance intuition is wrong more often than you think.
5. The 75th percentile is what counts. Your median can be great while your p75 fails.
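To see why the median can lie, compute both percentiles over a hypothetical sample of field LCP values (the numbers below are made up for illustration):

```javascript
// Hypothetical field LCP samples in ms — illustration only
const lcpSamples = [1200, 1400, 1500, 1600, 1700, 1800, 2600, 3100, 3400, 4200];

// Nearest-rank percentile: value at the ceil(p% * n)-th position
function percentile(values, p) {
  const sorted = [...values].sort((a, b) => a - b);
  const index = Math.ceil((p / 100) * sorted.length) - 1;
  return sorted[index];
}

console.log(percentile(lcpSamples, 50)); // 1700 — median passes (< 2500ms)
console.log(percentile(lcpSamples, 75)); // 3100 — p75 fails the assessment
```

Most users here have a fast experience, yet the assessment fails, because the slowest quarter is what Google evaluates.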
LCP: Making the Main Content Appear Fast
LCP measures when the largest visible element in the viewport finishes rendering. That's usually a hero image, a heading with large text, a video poster, or a background image painted via CSS.
The critical insight: LCP isn't about total page load. It's about one specific element. Find that element, and you have a targeted optimization problem instead of a vague "make it faster" situation.
Step 1: Identify Your LCP Element
Before optimizing anything, you need to know exactly which element the browser considers the LCP element. Open DevTools, go to the Performance panel, record a page load, and look for the "LCP" marker in the timings lane. Click it — it highlights the element.
You can also measure programmatically:
new PerformanceObserver((list) => {
const entries = list.getEntries();
const lastEntry = entries[entries.length - 1];
console.log('LCP element:', lastEntry.element);
console.log('LCP time:', lastEntry.startTime, 'ms');
console.log('LCP size:', lastEntry.size);
}).observe({ type: 'largest-contentful-paint', buffered: true });
Step 2: The LCP Breakdown
LCP time breaks down into four sub-parts: Time to First Byte (TTFB), Resource Load Delay, Resource Load Duration, and Element Render Delay. Each one is a separate optimization opportunity:
Step 3: Fix Each Sub-Part
TTFB (Time to First Byte):
<!-- Use a CDN. Serve from edge locations close to users. -->
<!-- Enable HTTP/2 or HTTP/3 for multiplexed connections. -->
<!-- For static pages, pre-render or use ISR: -->
// Next.js — static generation with revalidation
export const revalidate = 3600; // ISR: regenerate every hour
export default async function Page() {
const data = await fetchData(); // runs at build time + revalidation
return <HeroSection data={data} />;
}
Resource Load Delay — the sneaky one:
The browser can't download your hero image until it discovers it. If the image URL is buried in CSS or rendered by client-side JavaScript, discovery is late. Fix this with preload:
<!-- Preload the LCP image so the browser discovers it immediately -->
<link
rel="preload"
as="image"
href="/hero.webp"
fetchpriority="high"
type="image/webp"
/>
// Next.js — priority prop on next/image handles preload + fetchpriority
import Image from 'next/image';
export function HeroSection() {
return (
<Image
src="/hero.webp"
alt="Platform overview"
width={1200}
height={630}
priority // adds preload + fetchpriority="high" automatically
/>
);
}
Resource Load Duration:
// Serve correctly sized images — don't send a 4000px image for an 800px container
<Image
src="/hero.webp"
alt="Platform overview"
width={1200}
height={630}
sizes="(max-width: 768px) 100vw, (max-width: 1200px) 50vw, 1200px"
priority
/>
Use modern formats (WebP, AVIF), enable compression (Brotli over gzip), and set proper cache headers. Next.js handles image optimization automatically — but verify your images aren't bypassing it.
Element Render Delay:
<!-- BAD: Render-blocking CSS delays everything -->
<link rel="stylesheet" href="/non-critical-styles.css" />
<!-- GOOD: Defer non-critical CSS -->
<link rel="preload" href="/non-critical-styles.css" as="style" onload="this.onload=null;this.rel='stylesheet'" />
Client-side rendering is the number one LCP killer in React apps. If your above-fold content depends on useEffect + fetch, the browser has to: download JS → parse JS → execute JS → fetch data → re-render. That's four sequential waterfalls before the user sees content. Use Server Components or getServerSideProps for above-fold content. Always.
The LCP Debugging Checklist
- Is the LCP element server-rendered (not client-rendered)?
- Is the LCP resource (image/font) discoverable in the initial HTML?
- Does the LCP image have fetchpriority="high" or a preload hint?
- Is the image properly sized and compressed (WebP/AVIF)?
- Are there render-blocking resources delaying paint?
- Is TTFB under 800ms at the 75th percentile?
CLS: Stopping the Page from Jumping
CLS measures unexpected layout shifts — when visible elements move without user input. Every shift gets a score based on how much of the viewport moved and how far it moved. These scores accumulate throughout the page lifecycle.
The frustrating thing about CLS: it's not just a loading problem. CLS can spike 30 seconds after load when a lazy-loaded ad injects itself above the content, or when a web font finishes loading and changes text dimensions.
Imagine you're reading a newspaper and someone keeps cutting out sections and pasting them back in different spots while you read. That's CLS. Every time content shifts unexpectedly, the browser records how much moved and how far — and your score gets worse. The fix is always the same: reserve space before content loads.
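The per-shift math can be sketched directly. Each shift's score is the impact fraction (the share of the viewport affected by moved elements) times the distance fraction (the largest move distance divided by the viewport's larger dimension). The helper below is a simplified model of that formula, not the browser's exact implementation:

```javascript
// Simplified model of a single layout-shift score:
// score = impactFraction * distanceFraction
function layoutShiftScore({ impactedArea, moveDistance }, viewport) {
  const impactFraction = impactedArea / (viewport.width * viewport.height);
  const distanceFraction = moveDistance / Math.max(viewport.width, viewport.height);
  return impactFraction * distanceFraction;
}

// A banner pushes a 400x600px block of content down by 60px in a 400x800 viewport
const score = layoutShiftScore(
  { impactedArea: 400 * 600, moveDistance: 60 },
  { width: 400, height: 800 }
);
console.log(score); // 0.75 * 0.075 = 0.05625 — over half the 0.1 budget from one shift
```

One modest banner injection can burn most of the 0.1 budget on its own, which is why reserving space up front matters so much.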
The Five CLS Offenders (in Order of Frequency)
1. Images and videos without dimensions
<!-- BAD: Browser doesn't know the size until the image loads -->
<img src="/photo.jpg" alt="Team photo" />
<!-- GOOD: Browser reserves space immediately -->
<img src="/photo.jpg" alt="Team photo" width="800" height="600" />
<!-- GOOD: CSS aspect-ratio works too -->
<img src="/photo.jpg" alt="Team photo" style="aspect-ratio: 4/3; width: 100%;" />
// Next.js — next/image requires width and height (or fill) — CLS-safe by default
<Image src="/photo.jpg" alt="Team photo" width={800} height={600} />
2. Web fonts causing text reflow
When a web font loads and replaces the fallback font, text can resize — lines break differently, elements shift. The fix:
/* Use font-display: optional — if the font doesn't load in ~100ms, skip it */
@font-face {
font-family: 'Inter';
src: url('/fonts/inter.woff2') format('woff2');
font-display: optional; /* zero layout shift — fallback stays if font is slow */
}
/* Alternative: font-display: swap plus a metric-adjusted local fallback.
Note: size-adjust and the overrides belong on the fallback face, not the
web font — they reshape the fallback so it occupies the same space. */
@font-face {
font-family: 'Inter';
src: url('/fonts/inter.woff2') format('woff2');
font-display: swap;
}
@font-face {
font-family: 'Inter-fallback';
src: local('Arial');
size-adjust: 107%; /* tweak until fallback and web font occupy same space */
ascent-override: 90%;
descent-override: 22%;
line-gap-override: 0%;
}
/* usage: font-family: 'Inter', 'Inter-fallback', sans-serif; */
Next.js handles this automatically when you use next/font:
import { Inter } from 'next/font/google';
const inter = Inter({ subsets: ['latin'] }); // auto font-display + size-adjust
3. Dynamically injected content above existing content
// BAD: Banner appears after load, pushes everything down
function Page() {
const [showBanner, setShowBanner] = useState(false);
useEffect(() => {
fetchBannerStatus().then(setShowBanner);
}, []);
return (
<main>
{showBanner && <PromoBanner />} {/* injects above content = CLS */}
<Content />
</main>
);
}
// GOOD: Reserve space with min-height or use a placeholder
function Page() {
const [showBanner, setShowBanner] = useState(false);
useEffect(() => {
fetchBannerStatus().then(setShowBanner);
}, []);
return (
<main>
<div style={{ minHeight: showBanner ? 'auto' : '60px' }}>
{showBanner && <PromoBanner />}
</div>
<Content />
</main>
);
}
4. Late-loading ads and embeds
Always wrap ad slots and embeds in a container with explicit dimensions:
.ad-slot {
min-height: 250px; /* reserve space for the ad */
min-width: 300px;
contain: layout; /* prevent ad from affecting surrounding layout */
}
5. Animations that trigger layout
/* BAD: Animating height causes layout shift */
.accordion-content {
transition: height 0.3s ease;
}
/* GOOD: Use transform for expand/collapse — no layout shift */
.accordion-content {
transition: transform 0.3s ease;
transform-origin: top;
transform: scaleY(0);
}
.accordion-content.open {
transform: scaleY(1);
}
/* BETTER: Use grid for smooth height animation without layout shift */
.accordion-content {
display: grid;
grid-template-rows: 0fr;
transition: grid-template-rows 0.3s ease;
}
.accordion-content.open {
grid-template-rows: 1fr;
}
.accordion-content > div {
overflow: hidden;
}
INP: Making Interactions Feel Instant
INP replaced First Input Delay (FID) in March 2024, and it's a fundamentally harder metric to pass. FID only measured the delay of the first interaction. INP measures the worst interaction across the entire page lifecycle — the slowest click, tap, or keypress, from input to the next paint.
Here's why INP catches so many apps off guard: FID usually passed because the first click happened after hydration. INP fails because that one heavy dropdown, that one complex filter, that one accordion with 200 items — those are the interactions that take 400ms+ to paint.
The INP Timeline
When a user clicks a button, three phases determine the total INP time: input delay (waiting for the main thread to become free), processing duration (running the event handlers), and presentation delay (rendering and painting the next frame).
Fix 1: Break Long Tasks with Yielding
The browser can't respond to user input while a long task (50ms+) occupies the main thread. The fix is yielding — breaking your work into smaller chunks so the browser can process input between them.
// BAD: One long task blocks the main thread for 200ms
function processAllItems(items) {
for (const item of items) {
heavyComputation(item); // 200ms total, zero yield points
}
updateUI();
}
// GOOD: Yield to the browser between chunks
async function processAllItems(items) {
const CHUNK_SIZE = 10;
for (let i = 0; i < items.length; i += CHUNK_SIZE) {
const chunk = items.slice(i, i + CHUNK_SIZE);
for (const item of chunk) {
heavyComputation(item);
}
// Yield to the browser — let it process pending input
await scheduler.yield();
}
updateUI();
}
scheduler.yield() is the modern API for yielding. It tells the browser "I have more work, but check for user input first." If the browser doesn't support it, use this fallback:
function yieldToMain() {
if ('scheduler' in globalThis && 'yield' in scheduler) {
return scheduler.yield();
}
return new Promise((resolve) => {
setTimeout(resolve, 0);
});
}
Why setTimeout(0) is not the same as scheduler.yield()
setTimeout(resolve, 0) yields to the main thread, but it puts your continuation in the task queue — behind any other pending tasks. If ten other tasks are queued, you wait for all of them. scheduler.yield() puts your continuation at the front of the queue, so you resume as soon as the browser finishes handling input. This means scheduler.yield() gives the user a chance to interact without losing your place in line.
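Putting the fallback and the chunking together: a generic runner like the sketch below works in any environment, using scheduler.yield() where supported and setTimeout otherwise. The helper name and default chunk size are illustrative.

```javascript
function yieldToMain() {
  // scheduler.yield() where available; setTimeout(0) as the fallback
  if (typeof scheduler !== 'undefined' && 'yield' in scheduler) {
    return scheduler.yield();
  }
  return new Promise((resolve) => setTimeout(resolve, 0));
}

// Run work(item) over all items, yielding between chunks so the
// browser can handle pending input. chunkSize is a tuning knob.
async function runChunked(items, work, chunkSize = 10) {
  const results = [];
  for (let i = 0; i < items.length; i += chunkSize) {
    for (const item of items.slice(i, i + chunkSize)) {
      results.push(work(item));
    }
    await yieldToMain();
  }
  return results;
}
```

Calling await runChunked(rows, expensiveTransform) inside an event handler turns one long task into a series of short main-thread slices.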
Fix 2: Debounce and Throttle Event Handlers
High-frequency events like input, scroll, and pointermove can fire dozens of times per second. If each handler triggers expensive work, INP suffers:
// BAD: Every keystroke triggers a full search + re-render
searchInput.addEventListener('input', (e) => {
const results = searchDatabase(e.target.value); // 80ms
renderResults(results); // 40ms
// 120ms per keystroke — INP: 120ms per interaction
});
// GOOD: Debounce — only process after user pauses typing
let debounceTimer;
searchInput.addEventListener('input', (e) => {
clearTimeout(debounceTimer);
debounceTimer = setTimeout(() => {
const results = searchDatabase(e.target.value);
renderResults(results);
}, 150); // wait 150ms after last keystroke
});
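The inline timer logic above can be extracted into a reusable trailing-edge debounce helper (a common utility, sketched here rather than taken from any library):

```javascript
// Trailing-edge debounce: fn runs once, `delay` ms after the last call
function debounce(fn, delay) {
  let timer;
  return (...args) => {
    clearTimeout(timer);
    timer = setTimeout(() => fn(...args), delay);
  };
}

// Usage mirrors the listener above:
// searchInput.addEventListener('input', debounce((e) => {
//   renderResults(searchDatabase(e.target.value));
// }, 150));
```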
In React, use useDeferredValue for search-style patterns — it lets React deprioritize the expensive render without blocking the input:
function SearchResults({ query }: { query: string }) {
const deferredQuery = useDeferredValue(query);
const results = searchDatabase(deferredQuery);
return <ResultsList results={results} />;
}
Fix 3: Use CSS for Animations, Not JavaScript
Every JavaScript-driven animation runs on the main thread. Every CSS animation using transform or opacity runs on the compositor thread — completely off the main thread.
/* GOOD: Compositor-only properties — zero main thread work */
.dropdown {
transform: scaleY(0);
opacity: 0;
transition: transform 200ms ease, opacity 200ms ease;
transform-origin: top;
}
.dropdown.open {
transform: scaleY(1);
opacity: 1;
}
// BAD: JavaScript animation blocks the main thread during interactions
function openDropdown(el) {
let progress = 0;
function frame() {
progress += 0.05;
el.style.height = (progress * 300) + 'px'; // layout trigger every frame
if (progress < 1) requestAnimationFrame(frame);
}
requestAnimationFrame(frame);
}
Fix 4: Minimize Hydration Cost
In React/Next.js apps, hydration is often the biggest long task on the main thread. While hydration runs, users can see the page but can't interact with it — clicks are delayed until hydration finishes.
// BAD: Heavy component hydrates on page load even if user doesn't interact
import { HeavyEditor } from './HeavyEditor';
export default function Page() {
return (
<main>
<HeroSection />
<HeavyEditor /> {/* 500KB component, hydrates immediately */}
</main>
);
}
// GOOD: Lazy-load and defer hydration of heavy interactive components
import dynamic from 'next/dynamic';
const HeavyEditor = dynamic(() => import('./HeavyEditor'), {
ssr: false, // don't server-render — load only when needed
loading: () => <EditorSkeleton />,
});
For React Server Components, the server-rendered HTML is interactive without hydration for non-interactive parts. Only 'use client' components need hydration. Keep client components small and push them to the leaves of your component tree.
The Complete Debugging Workflow
When a Core Web Vital fails in the field, follow this workflow:
1. Get Field Data First
# Check your CrUX data via PageSpeed Insights API
curl "https://www.googleapis.com/pagespeedonline/v5/runPagespeed?url=YOUR_URL&strategy=mobile&category=performance"
Or use the CrUX Dashboard for trends over time. Field data shows you what real users experience — not what your M3 MacBook on fiber internet experiences.
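Once the JSON comes back, the p75 field values live under loadingExperience.metrics. A sketch of pulling them out; the metric key names follow the PageSpeed Insights v5 response shape but should be verified against a live response, and note that the API reports CLS multiplied by 100:

```javascript
// Pull p75 field metrics out of a PageSpeed Insights v5 response.
// Key names are assumptions based on the documented response shape;
// confirm against a real response before relying on them.
function extractFieldP75(psiResponse) {
  const metrics = psiResponse.loadingExperience?.metrics ?? {};
  const cls = metrics.CUMULATIVE_LAYOUT_SHIFT_SCORE?.percentile;
  return {
    lcpMs: metrics.LARGEST_CONTENTFUL_PAINT_MS?.percentile ?? null,
    cls: cls != null ? cls / 100 : null, // reported as score x 100
    inpMs: metrics.INTERACTION_TO_NEXT_PAINT?.percentile ?? null,
  };
}
```

Feed it the parsed JSON from the curl command above to get the three numbers that actually decide your assessment.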
2. Reproduce in Lab Conditions
For LCP and CLS, use Lighthouse with throttling:
- DevTools → Lighthouse tab → Mobile → Performance
- Enable CPU throttling (4x slowdown) and network throttling (Slow 4G)
For INP, you need real interaction:
- DevTools → Performance tab → Record → Interact with the page → Stop
- Look for long tasks (red corners on task bars) during interactions
3. Use the Web Vitals Extension
The Web Vitals Chrome extension gives you real-time CWV measurements as you browse. Enable console logging for detailed breakdowns:
// Add to your app for detailed CWV logging in development
import { onLCP, onCLS, onINP } from 'web-vitals';
onLCP(console.log);
onCLS(console.log);
onINP(console.log);
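In production you would report these values instead of logging them. The batching sketch below keeps the transport injectable so the logic is testable outside a browser; in the page itself you would pass navigator.sendBeacon.bind(navigator), and the endpoint path is hypothetical:

```javascript
// Queue web-vitals metrics and flush them in one payload.
// `send` is injected (e.g. navigator.sendBeacon in the browser).
function createVitalsReporter(send, endpoint = '/analytics/vitals') {
  const queue = [];
  return {
    report(metric) {
      queue.push({ name: metric.name, value: metric.value, id: metric.id });
    },
    flush() {
      if (queue.length === 0) return false;
      return send(endpoint, JSON.stringify(queue.splice(0)));
    },
  };
}

// Browser wiring (sketch):
// const reporter = createVitalsReporter(navigator.sendBeacon.bind(navigator));
// onLCP(reporter.report); onCLS(reporter.report); onINP(reporter.report);
// addEventListener('visibilitychange', () => {
//   if (document.visibilityState === 'hidden') reporter.flush();
// });
```

Flushing on visibilitychange matters because CLS and INP keep updating through the whole page lifecycle, so the final values are only known when the user leaves.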
4. Fix in Priority Order
| Common mistake | Why it hurts | What to do instead |
|---|---|---|
| Optimizing the Lighthouse score on a fast laptop and assuming field data will be good | Lab data tests ideal conditions. Field data reflects real users on real devices and networks. A Lighthouse 100 means nothing if p75 users on Android have 4s LCP. | Always check CrUX field data (75th percentile) as the source of truth |
| Using client-side fetch for above-fold content, causing a blank screen until JS loads | Client rendering adds sequential waterfalls: download JS, parse, execute, fetch data, re-render. Server rendering sends the final HTML immediately — no waterfalls. | Server-render above-fold content with RSC or static generation |
| Lazy-loading the LCP image with loading=lazy | loading=lazy defers the download until the image is near the viewport. For the LCP element, you want the opposite — download it as early as possible. | Use fetchpriority=high (the priority prop in Next.js) on the LCP image — never lazy-load it |
| Adding images without width and height attributes | Without dimensions, the browser allocates zero space until the image loads, then shifts everything when it renders. This is the number one cause of CLS in most apps. | Always include width, height, or aspect-ratio on images and media |
| Running heavy computation in event handlers without yielding | A 200ms synchronous event handler blocks the main thread for 200ms — the user sees a frozen UI. Yielding between chunks lets the browser paint updates and handle other input. | Break work into chunks with scheduler.yield() or use useDeferredValue for React state |
| Assuming font-display: swap prevents layout shift | swap explicitly causes a font swap, which changes text metrics and causes layout shift. optional keeps the fallback if the font is slow, guaranteeing zero shift. | Use font-display: optional or next/font with automatic size-adjust |
Real-World Optimization Patterns
Pattern: The Optimized Hero Section
This pattern combines every LCP optimization into one component:
import Image from 'next/image';
import { Inter } from 'next/font/google';
const inter = Inter({ subsets: ['latin'] });
export default function HeroSection() {
return (
<section className={inter.className}>
<h1>Ship faster, learn deeper</h1>
<p>The engineering platform for ambitious developers.</p>
<Image
src="/hero.avif"
alt="Platform dashboard showing course progress"
width={1200}
height={630}
sizes="100vw"
priority
/>
</section>
);
}
What this does right:
- Server Component (no 'use client') — renders on the server, zero hydration
- next/font — zero-CLS font loading with automatic size-adjust
- priority on Image — adds preload + fetchpriority="high"
- sizes="100vw" — browser picks the right image size from the srcset
- AVIF format — smallest file size for photographic images
Pattern: The INP-Safe Interactive List
'use client';
import { useState, useDeferredValue, useMemo } from 'react';
export function FilterableList({ items }: { items: Item[] }) {
const [query, setQuery] = useState('');
const deferredQuery = useDeferredValue(query);
const filtered = useMemo(
() => items.filter((item) =>
item.name.toLowerCase().includes(deferredQuery.toLowerCase())
),
[items, deferredQuery]
);
const isStale = query !== deferredQuery;
return (
<div>
<input
type="search"
value={query}
onChange={(e) => setQuery(e.target.value)}
placeholder="Filter items..."
/>
<ul style={{ opacity: isStale ? 0.7 : 1, transition: 'opacity 150ms' }}>
{filtered.map((item) => (
<li key={item.id}>{item.name}</li>
))}
</ul>
</div>
);
}
What this does right:
- useDeferredValue — input stays responsive while the filtered list renders at lower priority
- useMemo — avoids re-filtering when other state changes
- Visual stale indicator (opacity) — user knows the list is updating
- transition on opacity — compositor-only, zero main thread cost
1. Identify your LCP element before optimizing. Use PerformanceObserver or DevTools Performance panel.
2. Never lazy-load the LCP image. Use fetchpriority=high and preload instead.
3. Use font-display: optional or next/font for zero-CLS font loading.
4. Always set width and height (or aspect-ratio) on images, videos, and embeds.
5. Break event handlers over 50ms into chunks with scheduler.yield() to fix INP.
6. Server-render above-fold content. Client-side rendering for above-fold is the top LCP killer.
7. Measure field data (CrUX p75), not just lab data (Lighthouse). Field data is what Google ranks you on.
8. Use CSS transform and opacity for animations — they run on the compositor thread, off the main thread.