TL;DR — Quick Summary
Core Web Vitals are three Google metrics — LCP (loading ≤ 2.5s), INP (responsiveness ≤ 200ms), and CLS (visual stability ≤ 0.1) — that directly influence search rankings by measuring real-user experience. They became a ranking signal in June 2021, were updated in March 2024 (FID → INP), and remain unchanged in 2026. Google evaluates the 75th percentile of field data from CrUX over a rolling 28-day window. Only ~42% of mobile sites currently pass all three — optimizing is both a ranking and revenue opportunity.
What is Core Web Vitals (CWV)?
Core Web Vitals (CWV) are a set of three specific page experience metrics that Google uses as ranking signals, designed to quantify the real-world user experience of loading performance, interactivity, and visual stability. Introduced in May 2020 and updated in March 2024 (replacing FID with INP), they represent Google's answer to a fundamental question: 'Is this page fast, responsive, and stable for real users?'
Unlike synthetic benchmarks that test under ideal conditions, CWV are evaluated using field data from the Chrome User Experience Report (CrUX) — actual measurements collected from real Chrome users. Google looks at the 75th percentile (p75) of page loads over a rolling 28-day window, meaning 75% of real user experiences must meet the 'good' threshold for each metric.
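To make the p75 idea concrete, here is a minimal sketch of how a 75th-percentile aggregation over field samples works. CrUX computes this server-side; the function and sample values below are purely illustrative.

```javascript
// Illustrative p75 computation over a set of field measurements.
// Sorts ascending and picks the value at the requested percentile rank.
function percentile(samples, p) {
  const sorted = [...samples].sort((a, b) => a - b);
  const index = Math.ceil((p / 100) * sorted.length) - 1;
  return sorted[Math.max(0, index)];
}

// Example: LCP samples (seconds) from ten page loads.
const lcpSamples = [1.2, 1.8, 2.0, 2.1, 2.3, 2.4, 2.6, 3.0, 3.9, 5.2];
const p75 = percentile(lcpSamples, 75); // 3.0
```

Note the consequence: six of ten loads above are under 2.5s, yet the p75 is 3.0s, so the page fails LCP — 75% of visits, not a simple majority, must be "good".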
The three Core Web Vitals in 2026 are:
- Largest Contentful Paint (LCP) — Measures perceived loading speed. LCP marks the point when the largest visible content element (hero image, heading block, video poster) finishes rendering in the viewport. Target: ≤ 2.5 seconds.
- Interaction to Next Paint (INP) — Measures responsiveness. INP captures the latency between a user interaction (click, tap, keypress) and the next visual update (the 'paint'). Unlike the retired FID (which only measured the first interaction's input delay), INP evaluates all interactions throughout the page visit and reports the worst one (at the 98th percentile for pages with many interactions). Target: ≤ 200 milliseconds.
- Cumulative Layout Shift (CLS) — Measures visual stability. CLS quantifies how much visible content shifts unexpectedly during the entire lifespan of the page. It uses a 'session window' approach: layout shifts are grouped into sessions (max 5s duration, max 1s gap), and CLS reports the largest session window's total score. Target: ≤ 0.1.
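The CLS session-window grouping is the least intuitive of the three, so here is a hedged sketch of the scoring logic just described. The data shape (`{ t, value }` per shift, with `t` in seconds) is an assumption for illustration; the browser computes this internally.

```javascript
// Sketch of CLS session-window scoring: shifts are grouped into windows
// capped at 5s total duration with at most a 1s gap between shifts,
// and the reported CLS is the largest window's summed score.
function clsScore(shifts) {
  let max = 0;
  let windowSum = 0;
  let windowStart = -Infinity;
  let prevT = -Infinity;
  for (const { t, value } of shifts) {
    const gapExceeded = t - prevT > 1;
    const durationExceeded = t - windowStart > 5;
    if (gapExceeded || durationExceeded) {
      // Start a new session window.
      windowSum = 0;
      windowStart = t;
    }
    windowSum += value;
    prevT = t;
    max = Math.max(max, windowSum);
  }
  return max;
}
```

For example, two small shifts at t=0s and t=0.5s form one window (score 0.1), while a shift at t=3s starts a fresh window because the gap exceeds 1s — early shifts don't accumulate forever.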
Passing all three CWV thresholds at the p75 level is required for a page to receive the 'good page experience' designation in Google Search Console. There is no partial credit — failing even one metric results in an overall 'needs improvement' or 'poor' assessment.
CWV Thresholds
| Metric | Good | Needs Improvement | Poor |
|---|---|---|---|
| LCP (Largest Contentful Paint) | ≤ 2.5s | 2.5s – 4.0s | > 4.0s |
| INP (Interaction to Next Paint) | ≤ 200ms | 200ms – 500ms | > 500ms |
| CLS (Cumulative Layout Shift) | ≤ 0.1 | 0.1 – 0.25 | > 0.25 |
Google evaluates the 75th percentile (p75) of real-user field data over a rolling 28-day window.
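The table above maps directly to a simple classifier. This is an illustrative helper (names and structure are my own, not a Google API); note LCP and INP are expressed in milliseconds here, CLS is unitless.

```javascript
// Rating bands from the CWV thresholds table.
// LCP and INP in milliseconds; CLS is a unitless score.
const THRESHOLDS = {
  lcp: { good: 2500, poor: 4000 },
  inp: { good: 200, poor: 500 },
  cls: { good: 0.1, poor: 0.25 },
};

function rate(metric, value) {
  const { good, poor } = THRESHOLDS[metric];
  if (value <= good) return 'good';
  if (value <= poor) return 'needs-improvement';
  return 'poor';
}
```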
History & Evolution
Google first announced Core Web Vitals in May 2020 as part of a broader initiative to quantify real-user experience on the web, led by the Chrome team's Web Vitals project. The original trio consisted of LCP, FID (First Input Delay), and CLS.
Timeline of key milestones:
- May 2020 — Google announces Core Web Vitals as a new set of metrics for measuring user experience quality.
- June 2021 — CWV become an official Google ranking signal as part of the Page Experience Update, rolling out to mobile search first.
- February 2022 — Page Experience signals expand to desktop search results.
- March 2024 — The most significant change: Google retires First Input Delay (FID) and replaces it with Interaction to Next Paint (INP). INP is a far more comprehensive responsiveness metric — it measures all interactions throughout the page visit (not just the first one) and captures the full event lifecycle (input delay + processing + presentation delay), not just input delay.
- 2025–2026 — Thresholds remain unchanged (LCP ≤ 2.5s, INP ≤ 200ms, CLS ≤ 0.1). Google has signaled interest in a potential 'smoothness' metric for animation quality, but no new CWV has been formally proposed.
The FID-to-INP transition was particularly impactful because many sites that passed FID easily began failing INP — FID only measured one interaction's input delay, while INP evaluates every interaction's full latency. This forced a fundamental rethink of JavaScript architecture and event handler optimization across the web.
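The three-phase lifecycle is worth spelling out, since it explains why FID-passing sites fail INP. A hedged sketch, with field names chosen for illustration:

```javascript
// A single interaction's latency under INP spans three phases.
// FID measured only inputDelay, and only for the first interaction.
function interactionLatency({ inputDelay, processingTime, presentationDelay }) {
  return inputDelay + processingTime + presentationDelay;
}

// INP reports (roughly) the worst interaction across the whole visit.
function worstInteraction(interactions) {
  return Math.max(...interactions.map(interactionLatency));
}
```

A page whose first click has a 10ms input delay passed FID, but if a later click triggers 300ms of handler work plus 40ms of rendering, INP reports that full 345ms.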
How CWV is Measured
Core Web Vitals are measured through two complementary approaches: field data (Real User Monitoring) and lab data (synthetic testing). Understanding the difference is critical because Google uses field data — not lab scores — for ranking decisions.
Field Data (What Google Uses for Rankings): Field data captures CWV metrics from actual Chrome users visiting your site, aggregated in the Chrome User Experience Report (CrUX). Google evaluates the 75th percentile (p75) over a rolling 28-day window. This means 75% of real visits must meet the 'good' threshold.
Field data reflects the full diversity of real-world conditions: device types (flagship phones to budget Android), network speeds (fiber to 3G), geographic locations, browser extensions, and user behavior patterns. This makes it the most accurate representation of actual user experience — but also means it can differ significantly from lab results.
Lab Data (For Debugging & Iteration): Lab data simulates a page load in a controlled environment — specific device, specific network, specific location. It's reproducible and fast for iteration, making it ideal for development and debugging. However, lab data doesn't capture the long tail of real-world conditions.
Key Tools by Data Type:
Field Data Tools:
- Google Search Console — CWV report showing pass/fail for all indexed pages
- PageSpeed Insights — CrUX section ('Discover what your real users are experiencing')
- CrUX Dashboard — Historical trends in Data Studio
- CrUX API — Programmatic access for monitoring dashboards
Lab Data Tools:
- Lighthouse (Chrome DevTools, CLI, or CI) — Full performance audit with scoring
- WebPageTest — Deep waterfall analysis and filmstrip comparisons
- Chrome DevTools Performance Panel — Detailed traces for debugging specific interactions
Both:
- PageSpeed Insights — Shows both CrUX field data and Lighthouse lab data in one view
- Web Vitals Extension — Real-time overlay showing CWV as you browse
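Of the tools above, the CrUX API is the one you would script against for a monitoring dashboard. A hedged sketch of building a query, following the shape of the public CrUX API (the endpoint and metric names are from Google's docs; `apiKey` is a placeholder you supply yourself):

```javascript
// Build a CrUX API request for an origin's mobile field data.
const CRUX_ENDPOINT = 'https://chromeuxreport.googleapis.com/v1/records:queryRecord';

function buildCruxRequest(origin, apiKey) {
  return {
    url: `${CRUX_ENDPOINT}?key=${apiKey}`,
    body: {
      origin, // e.g. 'https://example.com'
      formFactor: 'PHONE',
      metrics: [
        'largest_contentful_paint',
        'interaction_to_next_paint',
        'cumulative_layout_shift',
      ],
    },
  };
}

// Usage (in an environment with fetch):
// const req = buildCruxRequest('https://example.com', apiKey);
// fetch(req.url, { method: 'POST', body: JSON.stringify(req.body) });
```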
The Practical Workflow:
1. Check field data first (PSI CrUX section or Search Console) → know where you stand with Google
2. Use lab data (Lighthouse, WebPageTest) → diagnose why metrics are failing
3. Fix and deploy → verify improvement in lab tools
4. Wait 28 days → verify CrUX field data reflects the improvement
Key rule: Field data (CrUX) determines Google rankings. Lab data (Lighthouse, WebPageTest) is for debugging and iteration.
Common Causes of Poor CWV Scores
Poor CWV performance has specific, diagnosable causes for each metric:
Common Causes of Poor LCP (> 2.5s):
- Unoptimized hero images — large JPEGs/PNGs without WebP/AVIF conversion, missing width/height, no srcset
- Slow server response time (TTFB > 800ms) — poor hosting, no CDN, uncached dynamic pages
- Render-blocking CSS and JavaScript — synchronous CSS/JS in <head> that delays first render
- Missing preload hints — LCP image not preloaded with <link rel='preload'> or fetchpriority='high'
- Web font loading delays — FOUT/FOIT caused by fonts loading before text renders
- Third-party scripts blocking the main thread during the critical rendering path
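Several of the LCP causes above can be addressed in markup alone. A hypothetical hero-image snippet (the file path and dimensions are placeholders):

```html
<head>
  <!-- Preload the LCP image so the browser fetches it early, at high priority -->
  <link rel="preload" as="image" href="/hero.avif" fetchpriority="high">
</head>
<body>
  <!-- Modern format, explicit dimensions, and an explicit priority hint -->
  <img src="/hero.avif" width="1200" height="600" fetchpriority="high" alt="Hero">
</body>
```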
Common Causes of Poor INP (> 200ms):
- Long JavaScript tasks (> 50ms) blocking the main thread — event handlers that don't yield
- Heavy event handlers — click/scroll/input handlers that trigger expensive DOM manipulation, layout thrashing, or synchronous API calls
- Excessive DOM size (> 1,500 nodes) — large DOM trees slow querySelector, style recalculation, and layout
- Unoptimized third-party scripts — analytics, chat widgets, and A/B testing tools running expensive JavaScript on every interaction
- Client-side rendering frameworks — React/Vue/Angular re-rendering large component trees on state changes
- Missing debouncing/throttling — scroll and input handlers firing on every pixel/keystroke
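On the last cause, a minimal throttle sketch. The injectable `now` parameter is my own addition to make the behavior easy to test; in the browser you would default it to `performance.now` and wrap a scroll or input handler.

```javascript
// Throttle: run fn at most once per intervalMs; drop calls inside the window.
function throttle(fn, intervalMs, now = Date.now) {
  let last = -Infinity;
  return (...args) => {
    if (now() - last >= intervalMs) {
      last = now();
      return fn(...args);
    }
    // Calls arriving within the interval are dropped.
  };
}

// Usage: element.addEventListener('scroll', throttle(updateHeader, 100));
```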
Common Causes of Poor CLS (> 0.1):
- Images and videos without explicit width/height attributes — browser can't reserve space before loading
- Dynamically injected content — ads, cookie banners, newsletter popups, and lazy-loaded elements pushing content down
- Web font loading — font-display: swap without size-adjust causes text to reflow when custom fonts load
- Late-loading CSS — stylesheets that change element dimensions after initial render
- Iframe embeds (YouTube, Google Maps) without reserved space
- Animations that trigger layout — animating width/height/top/left instead of transform/opacity
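The common thread in the CLS fixes is reserving space before content arrives and keeping animations off the layout engine. A hypothetical snippet (paths, sizes, and class names are placeholders):

```html
<!-- Explicit dimensions let the browser compute the aspect ratio and reserve space -->
<img src="/banner.jpg" width="800" height="400" alt="Banner">

<!-- Reserve a fixed slot for late-loading ads or embeds -->
<div style="min-height: 250px"><!-- ad injects here --></div>

<style>
  /* Animate transform/opacity (compositor-only) instead of top/left/width/height */
  .slide-in { transition: transform 0.3s ease; transform: translateX(0); }
</style>
```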
Diagnostic tip: Use Chrome DevTools Performance panel to identify exactly which resources and scripts cause each CWV failure. PageSpeed Insights' Opportunities and Diagnostics sections provide prioritized recommendations with estimated time savings.
For step-by-step optimization, platform-specific fixes, code examples, and case studies, read our full guide:
The Ultimate Guide to Core Web Vitals: How to Pass All Metrics & Boost Rankings in 2026

Struggling with CWV?
Request a free speed audit and we'll identify exactly what's holding your scores back.