
Shopify INP. Under 200ms.

INP replaced FID in March 2024. How to cut Shopify INP from 400ms to under 200: long-task profiling, JS deferral, and event handler patterns that pass.

Seven patterns, under 200 milliseconds.

Interaction to Next Paint replaced First Input Delay on March 12, 2024. Where FID measured only the first input's queue time, INP measures the full latency from any interaction (click, tap, keypress) to the next visual update, across the entire page session. Google's threshold at the 75th percentile of real-user data: under 200ms is good, 200 to 500ms needs improvement, over 500ms is poor. Most Shopify stores that pass LCP still fail INP because themes built before 2024 didn't optimize for it, and apps load synchronous JavaScript that runs during interactions. Seven patterns move INP from 400ms to under 200ms: break long tasks with scheduler.yield or setTimeout, move heavy work to a Web Worker, debounce input handlers, defer 3rd-party scripts loading on every click, use optimistic UI for filter and search, render skeleton states within 50ms, and use CSS transitions instead of JavaScript animations where possible. Six to eight hours of dev time cuts INP by 150 to 300ms on a typical mid-size store.

Every click, timed to paint.

Interaction to Next Paint is the latency between a user interaction and the next visual update on screen. The interaction can be a click, tap, key press, or any other input event. The "next paint" is the first frame the browser renders after the event handler completes. Google's web.dev INP guide defines the thresholds: 200ms or less is "good," 200 to 500ms is "needs improvement," over 500ms is "poor."

INP replaced First Input Delay on March 12, 2024, becoming a core ranking signal in Google Search. FID measured only the time between the first interaction and when the browser could start processing it; INP measures the full cycle, including the event handler execution and any synchronous rendering work. Stricter and more reflective of real user experience. The replacement was announced in Chrome's developer documentation a year earlier to give teams time to optimize.

INP is the worst interaction observed on the page, not the average. A page can have 100 fast clicks and one slow filter expansion that takes 600ms; the reported INP is 600ms. This makes outliers matter: the one slow interaction users hit ruins the score. Strictly, on pages with 50 or fewer interactions INP is the single worst interaction; on busier pages Chrome discards one high outlier per 50 interactions, which approximates a high percentile. Google's pass/fail threshold then applies at the 75th percentile of page loads in field data. The full algorithm is in the web.dev INP documentation.

Mobile-first measurement, same as LCP. Field data via the Chrome UX Report (CrUX), lab data via PageSpeed Insights using Total Blocking Time as a proxy. TBT under 200ms typically corresponds to INP under 200ms in the field, but the mapping isn't exact; some patterns (input handlers that block but rarely fire) inflate TBT without affecting real-world INP, and vice versa.

The five long-task patterns.

Long tasks on the main thread are the root cause. A long task is any JavaScript execution over 50ms; the browser flags it because user input during a long task gets queued behind the work. The Performance Observer API surfaces them: construct a PerformanceObserver and call observe({ type: 'longtask', buffered: true }), the same long-task data the Chrome DevTools Performance tab visualizes. Five patterns produce most Shopify INP failures.
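A minimal sketch of that observer, wrapped in a helper so it degrades to a no-op where the 'longtask' entry type is unavailable (Node, Safari). The helper name and its boolean return are illustrative, not a standard API.

```javascript
// Sketch: report every main-thread task over 50ms. Browser-only entry
// type; the guard makes this a safe no-op in unsupported environments.
function observeLongTasks(onLongTask) {
  if (typeof PerformanceObserver === 'undefined' ||
      !(PerformanceObserver.supportedEntryTypes || []).includes('longtask')) {
    return false; // environment can't observe long tasks
  }
  const observer = new PerformanceObserver((list) => {
    for (const entry of list.getEntries()) {
      // entry.duration is the blocking time in ms; entry.attribution
      // hints at the responsible frame or script.
      onLongTask(entry);
    }
  });
  observer.observe({ type: 'longtask', buffered: true });
  return true;
}
```

In a theme, call it once at startup, e.g. observeLongTasks(e => console.log('long task', Math.round(e.duration), 'ms')), and click through the store with the console open.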

First, search overlays that load synchronously when the search icon is tapped. The overlay parses templates, runs the search query, and renders results all in one synchronous block, often 200 to 400ms on mobile. Second, filter chip clicks on collection pages that recompute the entire grid in JavaScript before re-rendering. Third, add-to-cart buttons that fire 4 or 5 analytics events plus a cart-state update plus the visual confirmation, totaling 150 to 300ms of work.

Fourth, sticky header behavior on scroll that runs a JavaScript handler at every scroll event without debouncing or throttling. Each handler is fast (5 to 20ms) but they fire hundreds of times per second; cumulatively they keep the main thread busy and block any tap or click. Fifth, app-injected scripts that re-run their full setup on every product page interaction. Klaviyo's tracking script on add-to-cart, Yotpo's review trigger on filter change, Gorgias on any DOM mutation; they add 30 to 100ms each, and they stack.

The Chrome DevTools Performance tab, recorded during a real interaction (open the panel, click record, do the click you want to measure, stop record), shows the long tasks in red along the timeline. The flame chart below shows which function call inside the long task is the offender. This is the audit pattern: record, find the long task, identify the function, fix it, re-record.

Seven techniques, applied in order.

First, break long tasks into smaller chunks. The browser yields to user input between tasks, so a 200ms task split into four 50ms tasks lets a tap come through in the middle. The 2024 API is await scheduler.yield() from the Prioritized Task Scheduling API (Chromium-only for now); the older, universal pattern is await new Promise(r => setTimeout(r, 0)). Both yield control back to the browser between chunks.
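Both yields combine into one chunked loop. A sketch, with the chunk size and function names as assumptions; it prefers scheduler.yield() where it exists and falls back to the setTimeout pattern everywhere else:

```javascript
// Sketch: run `fn` over `items` in chunks, yielding to the event loop
// between chunks so queued input can be handled mid-work.
async function processInChunks(items, fn, chunkSize = 100) {
  const results = [];
  for (let i = 0; i < items.length; i += chunkSize) {
    for (const item of items.slice(i, i + chunkSize)) {
      results.push(fn(item));
    }
    if (i + chunkSize < items.length) {
      if (typeof scheduler !== 'undefined' && scheduler.yield) {
        await scheduler.yield(); // prioritized continuation (Chromium)
      } else {
        await new Promise((r) => setTimeout(r, 0)); // classic yield
      }
    }
  }
  return results;
}
```

A 10,000-item grid recompute run through this with chunkSize 500 turns one long task into twenty short ones, each under the 50ms flag.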

Second, move heavy computation to a Web Worker. Filtering 10,000 products by metafield, running a configurator that computes pricing across 50 variants, or generating a PDF receipt on the client: all good Worker candidates. The Worker runs off the main thread, so the UI stays responsive. MDN's Web Workers guide has the setup pattern; it's 20 lines of code for most use cases.
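The shape of that offload, sketched: keep the filter as a pure function, then hand it to a Worker. The worker file path and message shape below are assumptions for illustration, not Shopify or theme APIs.

```javascript
// Pure filter logic: environment-agnostic, easy to test, and the same
// code a worker script would run on its copy of the data.
function filterByMetafield(products, key, value) {
  return products.filter((p) => p.metafields && p.metafields[key] === value);
}

// Browser wiring (hypothetical worker file that runs the same filter
// and posts the result back). The main thread never blocks on it.
function filterInWorker(products, key, value) {
  return new Promise((resolve, reject) => {
    const worker = new Worker('/assets/filter-worker.js'); // assumed path
    worker.onmessage = (e) => { resolve(e.data); worker.terminate(); };
    worker.onerror = reject;
    worker.postMessage({ products, key, value });
  });
}
```

The design point: because the logic is a pure function, moving it into the worker is a copy-paste, and the main thread keeps only the cheap postMessage round trip.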

Third, debounce input handlers. Search-as-you-type especially: without debounce, every keystroke fires a search request. With a 50ms debounce, the search fires only after a 50ms quiet period; users typing fast don't trigger 8 searches for the word "shoes." Fourth, use requestIdleCallback for non-critical work like analytics events, prefetch hints, or background data warming.
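A debounce is a dozen lines. A minimal sketch, with the 50ms default taken from the pattern above:

```javascript
// Sketch: the wrapped function runs only after `delay` ms with no new
// calls, so fast typing fires one search, not one per keystroke.
function debounce(fn, delay = 50) {
  let timer = null;
  return function (...args) {
    clearTimeout(timer);                          // cancel the pending call
    timer = setTimeout(() => fn.apply(this, args), delay);
  };
}
```

Usage in a theme: input.addEventListener('input', debounce(e => runSearch(e.target.value), 50)), where runSearch is whatever the theme's search module exposes.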

Fifth, defer 3rd-party scripts, same pattern as the LCP work. Sixth, avoid synchronous fetch in event handlers; every fetch should be async and return immediately, with the UI showing a loading state. Seventh, use CSS transitions instead of JavaScript animations where the browser can do the work on the compositor thread; web.dev's animations guide covers which CSS properties (transform, opacity) animate without main-thread cost.

Visible response in 50 milliseconds.

The pattern that passes INP across every Shopify interaction: visible response in 50ms, data behind it can take longer. The user feels the click registered; the actual work happens behind that visible response without blocking the next interaction. Three before-and-after examples, all from real audits.

Search overlay, before: tap the search icon, 400ms of synchronous template parsing and result rendering, then the overlay appears. INP measured at 380 to 420ms. After: tap the search icon, CSS-only overlay slide-in animation (60ms via transform), input focus, then the result rendering happens async over the next 200ms as the user types. INP drops to 80 to 120ms. The visual feedback is instant; the async work happens during typing, which feels natural.

Filter chip click, before: tap the chip, JavaScript handler computes the new filter set, re-renders the entire product grid synchronously, then updates the URL. INP measured at 280 to 350ms. After: tap the chip, optimistic UI applies the visual filter state in 40ms via class toggle, the actual grid re-render happens in the next animation frame using requestAnimationFrame, URL update happens via History API after the render. INP drops to 100 to 140ms.

Cart mini-update on add-to-cart, before: button click, fetch to /cart/add.js, fetch to /cart.js, parse response, re-render mini-cart, fire 4 analytics events synchronously. INP at 350ms. After: button click, optimistic cart-count increment in 30ms, skeleton state for new line item, fetch /cart/add.js async, replace skeleton with real data when response arrives, analytics events fired via requestIdleCallback after the user-facing work completes. INP drops to 90ms.
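The add-to-cart "after" flow, sketched with the UI and network calls injected so the ordering is explicit. The ui object, postAddToCart, and scheduleIdle are assumptions standing in for the theme's mini-cart module, a fetch to /cart/add.js, and requestIdleCallback:

```javascript
// Sketch: optimistic feedback first, network second, analytics last.
async function addToCartOptimistic(item, ui, postAddToCart, scheduleIdle) {
  ui.incrementCartCount();                 // optimistic: instant, pre-network
  ui.showSkeletonLineItem(item);           // visible response within 50ms
  const line = await postAddToCart(item);  // e.g. fetch('/cart/add.js', ...)
  ui.replaceSkeleton(line);                // reconcile with real cart data
  scheduleIdle(() => ui.fireAnalytics(line)); // after user-facing work
}
```

In production, scheduleIdle would be cb => requestIdleCallback(cb), keeping the analytics batch entirely off the interaction's critical path.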

The tools, and what each shows.

INP can't be measured in Lighthouse the way LCP can; Lighthouse uses Total Blocking Time as a lab proxy. Real INP measurement requires either field data via CrUX or the web-vitals JavaScript library running in production. Install web-vitals, log INP to your analytics, and you get per-session INP for every real user. That's the feedback loop for fixes.

The local-dev pattern: import web-vitals in your theme's main.js with import { onINP } from 'web-vitals', then onINP(console.log, { reportAllChanges: true }). Open the preview store, click around, and watch the console log a new INP value each time a worse interaction is observed (without reportAllChanges, the library reports only once, when the page is hidden). Nothing logs before the first interaction because INP needs at least one input to exist. By click 5 to 10 you see the worst-interaction value, which is what Google reports.

Chrome DevTools Performance tab is the deep-dive tool. Click record, perform the interaction you want to measure, stop. The Interactions track shows the input event and its duration, the Main track shows the long task that was blocking, and the flame chart shows which functions consumed the time. The pattern for any INP investigation: find the long task that overlaps the interaction, read the function names, optimize that specific code.

PageSpeed Insights still uses TBT for the lab score; TBT under 200ms maps roughly to INP under 200ms in field, but the mapping is imperfect. Trust field data over lab data for INP. Field data refreshes every 28 days in CrUX; if you ship a fix, expect 7 to 14 days before the field metric reflects it consistently. Related reading: Shopify CLS fixes covers the layout-stability metric that pairs with INP and LCP in Core Web Vitals; together the three are the full Google ranking signal.

Six answers.

What is INP and why did it replace FID?

Interaction to Next Paint = latency from a user input (click, tap, keypress) to the next visual update. Replaced First Input Delay on March 12, 2024. Stricter because it measures the full interaction-to-paint cycle, not just input delay. Threshold: under 200ms good, 200-500ms needs improvement, over 500ms poor.

Why do Shopify stores fail INP more than LCP?

Two reasons. Apps load synchronous JS that runs during interactions (search overlay open, filter click, add-to-cart). Themes built before INP became a metric in 2024 didn't optimize for it. Most stores that pass LCP still need work to pass INP.

What's the single biggest INP fix on Shopify?

Defer or remove heavy 3rd-party scripts running on every interaction. Klaviyo, Yotpo, chat widgets, and analytics scripts often add 100-300ms of long-task time during clicks. Audit via DevTools Performance tab during a real interaction. Defer or lazy-load whatever blocks.

How do I measure INP locally during development?

Use the Web Vitals JS library to log INP in the console while you click around. Chrome DevTools Performance tab shows long tasks during the recording. PSI uses Total Blocking Time (TBT) as a lab proxy; TBT under 200ms typically maps to INP under 200ms in field data.

Should I move heavy work to a Web Worker?

Yes for long-running computations like filtering large product catalogs, generating PDF receipts, or processing custom configurators. Web Workers run off the main thread, freeing it for user interaction. The setup overhead is worth it for any task taking over 50ms regularly.

How do I make filters and search feel instant?

Three patterns. Optimistic UI (show the filter applied before the data arrives). Skeleton states (visual feedback within 50ms). Debounced input handlers (50ms debounce on search). The user sees response immediately even if the actual data load takes 200ms.

Interactions feel instant.

Our Shopify speed-optimization engagements ship a 2-week Core Web Vitals overhaul: long-task profiling, INP tuning, JS deferral, Web Worker offload, and a 30-day retention report. Scoped quote in 48 hours.
