§ 01 · free tool

Core Web Vitals, live.

LCP, INP, CLS, FCP, TTFB. 28-day p75 field data from real Chrome users (CrUX) plus a lab run for diagnosis. Mobile + desktop. Runs on Google PageSpeed Insights; no signup, no email gate.

The form below sends your URL to Google's public PageSpeed Insights v5 API and renders both data sources side by side: 28-day p75 field data from the Chrome User Experience Report (CrUX) and a Lighthouse lab run for diagnosis. Typical run: 25 to 50 seconds.
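The call itself is one GET request. A minimal sketch — the endpoint is Google's public v5 API, and the field-metric key names below follow its documented response shape, but verify against a live response before relying on them:

```javascript
// Google's public PageSpeed Insights v5 endpoint (no key needed for light use).
const PSI_ENDPOINT = 'https://www.googleapis.com/pagespeedonline/v5/runPagespeed';

function buildPsiUrl(pageUrl, strategy = 'mobile') {
  // strategy is 'mobile' or 'desktop'
  const params = new URLSearchParams({ url: pageUrl, strategy });
  return `${PSI_ENDPOINT}?${params}`;
}

// Pull the 28-day p75 field values out of a PSI response.
// `loadingExperience.metrics` holds the CrUX data.
function fieldP75(psiResponse) {
  const metrics = psiResponse.loadingExperience?.metrics ?? {};
  return {
    lcpMs: metrics.LARGEST_CONTENTFUL_PAINT_MS?.percentile ?? null,
    inpMs: metrics.INTERACTION_TO_NEXT_PAINT?.percentile ?? null,
    // the API reports CLS scaled by 100 (5 means 0.05)
    cls: metrics.CUMULATIVE_LAYOUT_SHIFT_SCORE?.percentile ?? null,
  };
}
```

`loadingExperience` is absent entirely when the URL has no CrUX record, which is why every extractor falls back to `null`.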

Origin URLs (example.com) typically have CrUX data even when sub-pages don't.


Privacy: the URL is sent only to Google's public PageSpeed Insights API. Zero server state on Digital Heroes. Field data is aggregate p75 across 28 days; no individual user data is exposed.

§ 02 · what each metric measures

Five numbers. Five questions.

LCP · Largest Contentful Paint answers "how long until the user can read the page?" Google's threshold is 2.5 seconds at the 75th percentile. The largest above-the-fold element (usually the hero image or H1 block) defines LCP. Common fixes: serve the hero in WebP/AVIF, use `fetchpriority="high"` on the hero `<img>`, preload the hero font, and defer third-party scripts out of the `<head>`.

INP · Interaction to Next Paint answers "how responsive does the page feel?" Threshold is 200 milliseconds; over 500ms is failing. INP captures the worst (or near-worst) interaction across the page lifecycle, so heavy main-thread work during scroll, click, or input shows up here. Common fixes: break long JavaScript tasks into chunks with `requestIdleCallback`, defer non-critical event listeners, audit third-party scripts that hijack the main thread.
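The "break long tasks into chunks" fix, sketched: a yield helper that prefers `scheduler.yield()` where the browser supports it and falls back to `setTimeout`. The chunk size of 50 is an arbitrary starting point, not a recommendation:

```javascript
// Yield to the main thread so pending input events can be handled between
// chunks of work. scheduler.yield() is the modern API; setTimeout is the
// universal fallback.
function yieldToMain() {
  if (typeof scheduler !== 'undefined' && scheduler.yield) return scheduler.yield();
  return new Promise((resolve) => setTimeout(resolve, 0));
}

// Process items in small batches instead of one long (>50ms) task.
async function processInChunks(items, handle, chunkSize = 50) {
  for (let i = 0; i < items.length; i += chunkSize) {
    items.slice(i, i + chunkSize).forEach(handle);
    await yieldToMain(); // a queued click or keypress can paint here
  }
}
```

The win is not doing less work — it is giving the browser a gap to respond to the interaction before the work resumes.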

CLS · Cumulative Layout Shift answers "does the page jump around?" Threshold is 0.1. The most common culprits: web fonts swapping in (use `font-display: optional` or preload), images without explicit width/height (set the attributes or use `aspect-ratio`), and ads/embeds inserted after first paint (reserve space with a min-height container). A CLS over 0.1 almost always traces to one of these three.

FCP · First Contentful Paint answers "when did anything appear?" Threshold is 1.8 seconds. FCP is supplementary — passing FCP without passing LCP means the page paints fast but the meaningful content takes a long time to arrive. Treat FCP as a debugging signal for "is there something rendering early?"

TTFB · Time to First Byte answers "how long to receive the first byte from the server?" Target under 800ms. TTFB measures server-render plus network; a slow TTFB caps every other metric because nothing else can start until the first byte arrives. Fixes: cache HTML at the edge for anonymous traffic, use SSR with streaming, move from a single-region origin to a global CDN.
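To check TTFB from the browser itself, the Navigation Timing API exposes `responseStart` — the first byte, measured from navigation start. A guarded sketch that is inert outside the browser:

```javascript
// First byte relative to navigation start, or null when no entry exists.
function ttfbFromNav(navEntry) {
  return navEntry ? navEntry.responseStart : null;
}

// Browser wiring: the 'navigation' entry describes the current page load.
if (typeof performance !== 'undefined' && performance.getEntriesByType) {
  const [nav] = performance.getEntriesByType('navigation');
  const ttfb = ttfbFromNav(nav);
  if (ttfb !== null) console.log(`TTFB: ${Math.round(ttfb)}ms`);
}
```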

§ 03 · fix path per metric

When the band flips amber.

The fix path depends on which metric is failing. Performance optimization is rarely "make everything faster" — it is "find the one bottleneck and unblock it". The lab data audits in our Lighthouse Score Checker tell you which audit hurts which metric.

If LCP fails: identify the LCP element (use Chrome DevTools Performance panel + the Largest Contentful Paint marker). Optimize that single element. For Shopify, the LCP element is typically the hero image; converting it to WebP and adding `fetchpriority="high"` on a single `<img>` tag can cut LCP by 30-50% with no other changes. For Next.js, use the `Image` component with `priority` set.
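Outside the Performance panel, the same marker is available programmatically. A sketch that logs the LCP element to the console — later entries supersede earlier ones as larger elements paint, so the last entry is the candidate that counts:

```javascript
// The latest LCP entry is the current candidate; earlier entries were
// superseded as larger elements painted.
function latestEntry(entries) {
  return entries.length ? entries[entries.length - 1] : null;
}

// Browser wiring — guarded so the snippet is inert outside the browser.
if (typeof PerformanceObserver !== 'undefined' &&
    PerformanceObserver.supportedEntryTypes?.includes('largest-contentful-paint')) {
  new PerformanceObserver((list) => {
    const last = latestEntry(list.getEntries());
    if (last) console.log('LCP element:', last.element, `${Math.round(last.startTime)}ms`);
  }).observe({ type: 'largest-contentful-paint', buffered: true });
}
```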

If INP fails: profile the worst interaction in DevTools Performance with the "Web Vitals" overlay enabled. The fix is usually breaking up a single long JavaScript task (over 50ms) into smaller chunks. The web.dev INP optimization guide documents the patterns. For Shopify themes, INP often fails because of cart-drawer, search-overlay, or popup widgets that synchronously query the DOM on input.

If CLS fails: open the page in Chrome DevTools, switch to Performance tab, record a load, and look for the Layout Shift markers. The shift will trace to one of: a font swap (fix with `font-display: optional` or preloading), an image without dimensions (set `width`/`height` attributes), or an embed/ad inserted late (reserve a min-height container). One fix usually drops CLS by 80%.
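The same Layout Shift markers can be read from the console. A sketch — the running sum below is an approximation, since the reported metric groups shifts into session windows and so scores slightly lower:

```javascript
// Shifts within 500ms of user input are expected and excluded from CLS.
function clsContribution(entry) {
  return entry.hadRecentInput ? 0 : entry.value;
}

// Browser wiring — guarded so the snippet is inert outside the browser.
// entry.sources points at the shifted nodes, i.e. the culprits.
if (typeof PerformanceObserver !== 'undefined' &&
    PerformanceObserver.supportedEntryTypes?.includes('layout-shift')) {
  let cls = 0;
  new PerformanceObserver((list) => {
    for (const entry of list.getEntries()) {
      cls += clsContribution(entry);
      if (!entry.hadRecentInput) {
        console.log('shift', entry.value.toFixed(4), entry.sources?.map((s) => s.node));
      }
    }
    console.log('running CLS (approx):', cls.toFixed(4));
  }).observe({ type: 'layout-shift', buffered: true });
}
```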

If TTFB fails: this is server-side. For Shopify, TTFB is mostly out of your hands but Online Store 2.0 themes have a 100-200ms baseline you can control via section liquid efficiency. For custom builds, check origin location (move to a closer region), enable HTML edge caching for anonymous users, and turn on streaming SSR if the framework supports it.

§ 04 · questions

Six questions users ask.

What are Core Web Vitals?

Core Web Vitals are the three user-experience metrics Google uses as a ranking signal: Largest Contentful Paint (LCP) measures loading speed and should be under 2.5 seconds at the 75th percentile; Interaction to Next Paint (INP) measures responsiveness and should be under 200 milliseconds; Cumulative Layout Shift (CLS) measures visual stability and should be under 0.1. Two supplementary metrics are also reported: First Contentful Paint (FCP, target under 1.8s) and Time to First Byte (TTFB, target under 800ms).

Field data vs lab data — which one ranks me?

Field data (CrUX) is what Google uses as a ranking signal. It is the 28-day p75 (75th-percentile) of real Chrome user measurements collected with the user's consent. Lab data (Lighthouse) is a single simulated run on Google's emulated mid-tier device and is excellent for diagnosing what to fix. The two often disagree — lab can show 'good' while field shows 'needs improvement' if your real users are on slower devices than the emulator. Always optimize for field.

What does 'No CrUX data available' mean?

CrUX requires a minimum number of opted-in Chrome users to visit your URL or origin in the last 28 days before Google publishes the data. Lower-traffic pages and sites under roughly 10K visits per month often show 'No data'. The lab run still works in that case; for field data, run the origin URL (e.g., example.com instead of example.com/about/) — origin-level data has a lower threshold than URL-level.
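For scripted checks, the CrUX API can be queried directly with the same URL-then-origin fallback. A sketch — endpoint and request-body shape follow the public CrUX API, which requires a Google Cloud API key:

```javascript
const CRUX_ENDPOINT = 'https://chromeuxreport.googleapis.com/v1/records:queryRecord';

// URL-level record, or the origin-level record (lower traffic threshold).
function cruxRequestBody(target, level = 'url') {
  return level === 'origin'
    ? { origin: new URL(target).origin }
    : { url: target };
}

// Try URL level first; on a miss (404 for no record), retry at origin level.
async function queryCrux(target, apiKey, fetchFn = fetch) {
  for (const level of ['url', 'origin']) {
    const res = await fetchFn(`${CRUX_ENDPOINT}?key=${apiKey}`, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify(cruxRequestBody(target, level)),
    });
    if (res.ok) return { level, record: (await res.json()).record };
  }
  return null; // no field data at either level
}
```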

INP replaced FID. What changed?

Interaction to Next Paint replaced First Input Delay as a Core Web Vital in March 2024. FID measured only the delay before the browser started processing the first interaction; INP measures the full duration from interaction to the next paint, across every interaction throughout the page lifecycle, and reports the worst (or near-worst) experience. INP is harder to pass than FID was — many sites that comfortably scored 'good' on FID now show 'needs improvement' on INP.
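INP's raw material is visible through the `event` entry type: entries carrying an `interactionId` belong to a user interaction, and `duration` is the interaction-to-paint span INP scores. A sketch — note that real INP takes a high percentile across all interactions rather than the plain maximum below, and the `web-vitals` library's `onINP` is the robust way to compute it:

```javascript
// Longest interaction-to-next-paint duration seen so far (0 if none).
function worstDuration(entries) {
  return entries.reduce((max, e) => Math.max(max, e.duration), 0);
}

// Browser wiring — guarded so the snippet is inert outside the browser.
// durationThreshold filters out interactions already comfortably passing.
if (typeof PerformanceObserver !== 'undefined' &&
    PerformanceObserver.supportedEntryTypes?.includes('event')) {
  new PerformanceObserver((list) => {
    const interactions = list.getEntries().filter((e) => e.interactionId);
    if (interactions.length) {
      console.log('slowest interaction so far:', worstDuration(interactions), 'ms');
    }
  }).observe({ type: 'event', durationThreshold: 200, buffered: true });
}
```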

Does this tool store my URL?

No. The URL you enter is sent only to Google's public PageSpeed Insights API. Nothing is logged on Digital Heroes servers. The 'recent scans' panel uses your browser's localStorage and stays on your device. No signup, no email collection, no analytics beacon.
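For illustration, a client-only "recent scans" list can be this small. The storage object is injected so the logic runs anywhere; the `recent-scans` key and cap of 5 are hypothetical, not this page's actual implementation:

```javascript
// Prepend a scanned URL to a capped, de-duplicated list held entirely in
// the caller-supplied storage (localStorage in the browser). Nothing is
// sent to a server.
function pushRecentScan(storage, url, max = 5) {
  const key = 'recent-scans';
  const prev = JSON.parse(storage.getItem(key) || '[]');
  const next = [url, ...prev.filter((u) => u !== url)].slice(0, max);
  storage.setItem(key, JSON.stringify(next));
  return next;
}
```

In the browser the call is simply `pushRecentScan(localStorage, url)`.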

Why is my CLS bad if the page looks stable?

CLS catches layout shifts during the entire page session, not just the first paint. The most common culprits: web fonts swapping in (use font-display: optional or preload the font), images without explicit width/height (set the attributes or use aspect-ratio), and ads or embeds inserted after first paint (reserve space with a min-height container). A CLS over 0.1 almost always traces to one of these three.

§ 06 · need a real engagement

CWV failing? Two-week fix.

A 30-minute call covers the failing metric, the engineering required, and a fixed-price quote. CWV improvements compound across ranking and conversion.