
Quiz: Core Web Vitals Diagnosis

Advanced · 15 min read

Think Like a Performance Engineer

Performance debugging is detective work. You get a symptom — a slow metric, a user complaint, a Lighthouse flag — and you trace it back to the root cause. The difference between a senior engineer and everyone else is how fast and accurately they diagnose.

Each question below presents a real production scenario. You have four suspects. Pick the most likely root cause.

Mental Model

When diagnosing Core Web Vitals, always ask three questions in order:

  1. Which metric is failing? LCP, CLS, or INP tell you completely different stories. LCP is about loading the main content. CLS is about visual stability. INP is about responsiveness to user input.
  2. When does it happen? On initial load? After interaction? On specific devices? Field data vs lab data differences narrow the search fast.
  3. What changed? If performance regressed, the diff between the last good deploy and the current one is your strongest lead.

This diagnostic framework turns vague "it's slow" reports into focused investigations.

Key Rules
  1. LCP above 2.5s misses the "good" threshold (above 4s is rated poor). Check image optimization, render-blocking resources, server response time, and client-side rendering delays
  2. CLS above 0.1 misses "good" (above 0.25 is poor). Check missing dimensions on images/embeds, dynamically injected content, web font swaps, and top-of-page banners
  3. INP above 200ms misses "good" (above 500ms is poor). Check long tasks on the main thread, heavy event handlers, forced synchronous layouts, and third-party scripts
  4. Field data (CrUX) reflects real users on real devices; prioritize it over lab data (Lighthouse) when they disagree
  5. Google assesses the 75th percentile, so fixing the median is not enough if your tail is terrible
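The thresholds above can be captured in a small classifier. This is a sketch using Google's published good/poor cutoffs; the function and constant names are illustrative, not from any library:

```javascript
// Core Web Vitals thresholds: "good" at or below the first cutoff,
// "poor" above the second, "needs improvement" in between.
const THRESHOLDS = {
  LCP: { good: 2500, poor: 4000 }, // milliseconds
  CLS: { good: 0.1,  poor: 0.25 }, // unitless layout-shift score
  INP: { good: 200,  poor: 500 },  // milliseconds
};

function rateMetric(name, value) {
  const t = THRESHOLDS[name];
  if (value <= t.good) return 'good';
  if (value <= t.poor) return 'needs improvement';
  return 'poor';
}
```

In a real audit you would feed this the 75th-percentile field values from CrUX, not a single lab run.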

Scenario 1: The Slow Hero Image

A marketing landing page has an LCP of 4.2 seconds. The LCP element is a hero image (1440x800). The image is served as a PNG from the same origin. The server responds in 120ms. There are no render-blocking scripts in the head.

Quiz
What is the most likely cause of the 4.2s LCP?

Scenario 2: The Invisible Text

Users on slower connections report that a blog page shows a blank white area where the headline should be for 2-3 seconds, then text suddenly appears. LCP is 3.8 seconds. The LCP element is an h1 heading.

Quiz
What is the most likely cause?

Scenario 3: The Jumping Content

An e-commerce product page has a CLS score of 0.35. Users on mobile report that they accidentally tap the wrong buttons because content keeps shifting. The page has product images, a price section, customer reviews, and a sticky add-to-cart bar.

Quiz
What should you check first?
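To attribute a score like 0.35 to specific elements, you would observe `layout-shift` entries in the browser. The aggregation itself is simple; this sketch sums shift values the way the metric does at its core (real CLS additionally groups shifts into session windows, which is omitted here for brevity), using plain objects shaped like LayoutShift entries:

```javascript
// Each entry mimics a `layout-shift` PerformanceEntry:
// { value: number, hadRecentInput: boolean }
function cumulativeLayoutShift(entries) {
  return entries
    .filter(e => !e.hadRecentInput) // shifts right after user input don't count
    .reduce((sum, e) => sum + e.value, 0);
}
```

In the browser, the entries would come from a `PerformanceObserver` observing the `layout-shift` type; each entry's `sources` array points at the shifted elements, which is what narrows the suspect list.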

Scenario 4: The Sluggish Search

A product listing page with 200 items has an INP of 350ms. Users report noticeable delay when typing in the search/filter input. Each keystroke filters the visible product list.

Quiz
What is the most likely cause of the 350ms INP?
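When per-keystroke work blocks the main thread, one common mitigation (alongside debouncing the input) is to break the work into chunks and yield between them so input events can be processed. A minimal sketch, with hypothetical names; in newer browsers `scheduler.yield()` would replace the `setTimeout` trick:

```javascript
// Filter a large list in chunks, yielding to the event loop between
// chunks so a keystroke never waits behind one long task.
async function filterInChunks(items, predicate, chunkSize = 500) {
  const out = [];
  for (let i = 0; i < items.length; i += chunkSize) {
    for (const item of items.slice(i, i + chunkSize)) {
      if (predicate(item)) out.push(item);
    }
    // Yield: lets pending input handlers run before the next chunk.
    await new Promise(resolve => setTimeout(resolve, 0));
  }
  return out;
}
```

For 200 items the filtering itself is rarely the cost; profiling often shows the re-render of the list, not the filter, as the long task, and the same chunking idea applies there.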

Scenario 5: The Third-Party Trap

A news article page has good lab scores (Lighthouse LCP 1.8s, CLS 0.02) but terrible field data (CrUX LCP 4.5s, CLS 0.42). The page loads several third-party scripts: analytics, ad network, social share widgets, and a consent banner.

Quiz
Why is there such a large gap between lab and field data?

Scenario 6: The Late Hydration

A Next.js e-commerce page shows server-rendered HTML almost instantly, but LCP is still measured at 3.5 seconds. The LCP element is a product image that is visible in the initial HTML. The image is optimized (WebP, responsive srcset, 45KB).

Quiz
If the image is small and present in the initial HTML, what is most likely delaying LCP?

Scenario 7: The Font Flash

A SaaS dashboard loads with system fonts, then at around 1.5 seconds, all text reflows into a custom font. Users see a noticeable flash. CLS is 0.18. The custom font is loaded via Google Fonts with a standard stylesheet link in the head.

Quiz
What is the best fix to eliminate this CLS from font swapping?
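Part of the standard fix is a size-adjusted fallback font, so the system font occupies roughly the same space as the custom font and the swap causes no reflow. The CSS `size-adjust` descriptor takes a percentage; this helper computes it from average glyph widths (the width numbers below are hypothetical measurements, e.g. from a font-metrics tool):

```javascript
// Compute the CSS `size-adjust` percentage that scales a fallback font
// so its average glyph width matches the custom font's, minimizing the
// layout shift when the swap happens.
// Widths are average advance widths expressed as a fraction of font size.
function sizeAdjustPercent(targetAvgWidth, fallbackAvgWidth) {
  return ((targetAvgWidth / fallbackAvgWidth) * 100).toFixed(1) + '%';
}
```

The resulting value goes into an `@font-face` rule for the fallback, alongside `font-display` tuning on the custom font.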

Scenario 8: The Scroll Jank

A social media feed page scrolls smoothly on desktop but has noticeable jank on mobile. The feed contains 50+ posts, each with images, like counts, and comment previews. The Performance panel shows long tasks during scroll.

Quiz
What is the most likely cause of scroll jank on mobile?
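If the diagnosis points at too many live DOM nodes, the usual remedy is list virtualization: only the posts near the viewport exist in the DOM. The geometry at the heart of that technique is a few lines; this sketch assumes fixed-height items and an invented `overscan` buffer parameter:

```javascript
// Given a scroll position, compute which feed items should be in the DOM.
// `overscan` renders a few extra items beyond the viewport edges so fast
// scrolling doesn't reveal blank gaps.
function visibleRange(scrollTop, viewportHeight, itemHeight, itemCount, overscan = 3) {
  const first = Math.max(0, Math.floor(scrollTop / itemHeight) - overscan);
  const last = Math.min(
    itemCount - 1,
    Math.ceil((scrollTop + viewportHeight) / itemHeight) + overscan
  );
  return { first, last };
}
```

Production libraries (react-window, virtual scrollers) handle variable heights and measurement, but they reduce to this same windowing calculation.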

Scenario 9: The Mysterious Regression

After deploying a new feature, LCP regresses from 2.1s to 3.8s. The feature adds a small notification bell icon to the header. The icon is a 2KB SVG. No other changes were made to the page.

Quiz
How could a tiny SVG icon cause a 1.7 second LCP regression?

Scenario 10: The Mobile-Only CLS

A responsive blog page has CLS of 0.02 on desktop but 0.28 on mobile. The layout uses CSS Grid. There are no ads or dynamically injected content. Images all have width and height attributes.

Quiz
What is most likely causing CLS only on mobile?

Scenario 11: The API Waterfall

A dashboard page makes 6 sequential API calls on mount. Each call takes 200-400ms. The page shows a loading spinner for 2.5 seconds before content appears. LCP is 3.8s. The LCP element is a chart that renders after all data is loaded.

Quiz
What is the best approach to improve LCP?
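Whatever the chosen answer, the arithmetic of a waterfall is worth internalizing: sequential awaits cost the sum of the call times, while `Promise.all` costs only the longest. A sketch with a stubbed fetcher (names and timings are illustrative):

```javascript
// Stand-in for a real API call: resolves with `name` after `ms` milliseconds.
const fakeCall = (name, ms) =>
  new Promise(resolve => setTimeout(() => resolve(name), ms));

// Sequential: total wait is the SUM of all call durations (the waterfall).
async function loadSequential(calls) {
  const results = [];
  for (const [name, ms] of calls) results.push(await fakeCall(name, ms));
  return results;
}

// Parallel: total wait is the MAX of the call durations.
function loadParallel(calls) {
  return Promise.all(calls.map(([name, ms]) => fakeCall(name, ms)));
}
```

Six sequential 200-400ms calls cost roughly 1.2-2.4s; issued in parallel they cost roughly 400ms, and rendering the chart as soon as its own data arrives (rather than after all six) shaves LCP further.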

Scenario 12: The Multi-Factor Nightmare

A large e-commerce homepage has all three Core Web Vitals failing: LCP 5.1s, CLS 0.31, INP 380ms. The page has a hero carousel, lazy-loaded product grids, a chat widget, and a promotional banner that appears 2 seconds after load. Chrome DevTools shows 4.2MB of JavaScript, a 1.8MB unoptimized hero image, and the main thread is blocked for 2.3 seconds during initial load.

Quiz
In what order should you prioritize fixes for maximum impact?

Scoring Guide

  11-12: You think like a performance engineer. You could run a Core Web Vitals audit at any company.
  8-10: Strong diagnostic skills. Review the scenarios you missed; the explanations contain the patterns you are missing.
  5-7: You know the metrics but struggle with root cause analysis. Revisit the optimization playbook and practice with real PageSpeed Insights reports.
  0-4: Start with the Core Web Vitals fundamentals lesson. Focus on understanding what each metric actually measures before diagnosing causes.

What Separates Good From Great

The difference between knowing Core Web Vitals and being able to diagnose them is pattern recognition. Great performance engineers have seen enough production issues that they can look at a metric, a page structure, and a user complaint — and immediately narrow the suspect list to two or three possibilities.

That pattern recognition comes from practice. Run PageSpeed Insights on sites you use daily. Open the Performance panel on pages that feel slow. Read the Chrome team's case studies on web.dev. Every diagnosis you practice makes the next one faster.