Lab vs. Field Web Vitals: Why Your PageSpeed Score Doesn't Match Reality
Your store scores 92 on PageSpeed Insights. Search Console says you're failing Core Web Vitals. Both are right — they measure different things. Here's how to read both and which one matters for ranking.
You run PageSpeed Insights on your homepage. The Performance score is 92, all green. You feel good about your site speed.
You log in to Search Console and the Core Web Vitals report tells you 38% of your URLs are failing. Performance is "Poor" on mobile. You feel less good.
Both reports are correct. They're measuring different things, and only one of them affects ranking.
The Two Kinds of Performance Data
Lab data (synthetic)
Lab data is what you get when a tool simulates a page load on a controlled device under controlled network conditions. PageSpeed Insights' lab section, Lighthouse, GTmetrix, WebPageTest — all of these run synthetic tests. The benefits:
- Reproducible — running the test 10 times gives similar results
- Fast — get results in 30 seconds
- Diagnostic — surfaces specific opportunities ("Reduce unused JavaScript", "Defer offscreen images")
- Available for any URL, even ones with no real traffic
The downside: lab data tests one device profile, one network condition, one moment in time. It doesn't reflect what your actual users experience on their actual devices.
Field data (RUM / CrUX)
Field data is collected from real users browsing your real site. The Chrome User Experience Report (CrUX) aggregates this data from millions of Chrome users who opted in. The benefits:
- Real — captures actual user experiences across devices, networks, and conditions
- Distributional — shows the 75th percentile, not just an average
- Session-aware — captures interactions that happen later in a session, not just initial load
- Authoritative — this is what Google uses for ranking decisions
The downside: requires real traffic (CrUX needs ~1000+ visits per origin to report data), lags behind changes (a 28-day rolling window, so a fix takes weeks to fully register), and isn't actionable on its own — it tells you something is slow without telling you what to fix.
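You can pull that field data programmatically from the CrUX API instead of eyeballing PageSpeed Insights. A minimal sketch, assuming you have an API key from the Google Cloud console (the endpoint is the public one; `parseP75` and the key handling are illustrative, not a client library):

```javascript
// Build the request for the public CrUX API endpoint.
// The response reports each metric's 75th-percentile value.
function buildCruxRequest(origin, apiKey) {
  return {
    url: `https://chromeuxreport.googleapis.com/v1/records:queryRecord?key=${apiKey}`,
    body: JSON.stringify({ origin, formFactor: 'PHONE' }),
  };
}

// Pull one metric's p75 out of a CrUX response record.
// Returns null when CrUX has no data for that metric (too little traffic).
function parseP75(record, metricName) {
  const metric = record.metrics && record.metrics[metricName];
  return metric ? Number(metric.percentiles.p75) : null;
}
```

POST the built request body to the URL and feed the `record` from the JSON response into `parseP75` with metric names like `largest_contentful_paint` or `interaction_to_next_paint`.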
Why They Disagree
Five common reasons your lab score is better than your field score:
1. Lab tests run on a fast device profile
The default mobile profile in PageSpeed Insights emulates a mid-range Android (historically a Moto G4) with simulated slow-4G throttling. Real users are on a mix of devices that ranges from premium iPhones to 5-year-old Androids on rural networks. The 75th percentile of your real users is almost always slower than the lab profile.
2. Lab tests don't include third-party iframe content
Reviews widgets, chat widgets, embedded videos — many of these load in iframes that lab tests measure differently than the user-perceived load. Field data captures what users actually see; lab data may underestimate the impact.
3. Lab tests don't measure the long tail of interactions
This is the big one for INP. Lab tests synthetically click a few buttons. Real users click, scroll, swipe, expand filters, open mini-carts, and toggle variants — sometimes hundreds of interactions per session. The worst interaction (which is what INP measures) happens in real sessions, not in lab tests.
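A rough sketch of why the long tail bites: per web.dev's description, INP is essentially the worst interaction latency of the session, ignoring one outlier per 50 interactions. The simplified function below (an approximation, not the browser's actual algorithm) shows how a single slow mini-cart toggle dominates the score:

```javascript
// Approximate INP selection: worst interaction latency,
// skipping one outlier per 50 interactions.
function estimateInp(durations) {
  const sorted = [...durations].sort((a, b) => b - a); // worst first
  const skip = Math.floor(durations.length / 50);      // outliers to ignore
  return sorted[skip] ?? null;
}

// A lab test might only see a handful of quick clicks...
console.log(estimateInp([40, 60, 55]));      // 60
// ...while one real-session interaction blows past 200 ms and sets INP
console.log(estimateInp([40, 60, 55, 480])); // 480
```

Three fast interactions pass; add one slow one and the whole session is "Poor" — which is why synthetic clicks rarely reproduce real-world INP failures.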
4. Cache state differs
Lab tests typically run "cold" (no cache). Real users have a mix of cold first-visits and warm return visits. Field data averages across both. If your warm performance is great but cold is terrible, lab tests will report worse than field; if your warm performance is terrible (e.g., heavy first-party JS that re-runs on every nav), lab can be better.
5. Geographic variance
Lab tests run from the test tool's server location. Field data is global. If your CDN doesn't serve well to certain regions, your field data will be dragged down by those users while lab is unaffected.
What Google Uses for Ranking
Field data. Specifically: the 75th percentile of CrUX data over the past 28 days, segmented by mobile and desktop. If 75% or more of users experience "Good" performance for a metric, your origin (or URL group) gets the "Good" assessment for that metric.
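To make the 75th-percentile rule concrete, here's a tiny worked example. The nearest-rank method below is a simplification (CrUX aggregates histograms, not raw samples), but it shows the shape of the math: two ugly outliers don't fail you as long as the 75th-percentile user is under the threshold.

```javascript
// Nearest-rank p75: the value at least 75% of samples fall at or below.
function p75(values) {
  const sorted = [...values].sort((a, b) => a - b);
  const idx = Math.ceil(0.75 * sorted.length) - 1;
  return sorted[idx];
}

// Ten simulated LCP samples in ms: mostly fast, two slow outliers
const lcpSamples = [1200, 1400, 1500, 1600, 1800, 2000, 2100, 2300, 4800, 6200];
const GOOD_LCP_MS = 2500; // Google's published "Good" threshold for LCP

console.log(p75(lcpSamples));                // 2300
console.log(p75(lcpSamples) <= GOOD_LCP_MS); // true: assessed "Good"
```

Flip it around and the same math explains failures: if your slowest quarter of users sits above the threshold, no amount of speed for the other three quarters saves the assessment.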
Lab data has zero direct impact on ranking. It's a diagnostic tool, not a ranking input.
This means: improve your lab score all you want, but unless field data changes, your ranking signal doesn't change. The corollary: shipping a fix that improves lab score by 15 points but doesn't affect field data does nothing for SEO.
How to Use Both Correctly
The right workflow:
- Monitor field data weekly — Search Console's Core Web Vitals report. This is your scorecard. Track the percentage of URLs in "Good" / "Needs Improvement" / "Poor" buckets over time.
- Diagnose with lab data when field data degrades — when Search Console shows a regression, run PageSpeed Insights on representative URLs to identify what changed. Lab data's "Opportunities" section tells you what to fix.
- Validate fixes with lab data, then wait for field data confirmation — after deploying a fix, lab data shows whether the fix worked technically. Field data takes 28 days to fully reflect the change because of the rolling window.
- Use real-user monitoring (RUM) for faster feedback — if you can't wait 28 days for CrUX to update, install web-vitals.js on your site and ship the data to your analytics. You'll see the impact of changes within hours instead of weeks.
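The RUM step above can be sketched as a small batching layer. The open-source web-vitals library delivers metrics as objects like `{name, value, id}`; the code below collects them and serializes one beacon body (the browser wiring at the bottom is shown as comments and assumes a `/vitals` endpoint on your server — a placeholder, not a real API):

```javascript
// Collect metric callbacks into a batch, then serialize one beacon body.
const batch = [];

function addMetric(metric) {
  batch.push({ name: metric.name, value: metric.value, id: metric.id });
}

function flushBeacon() {
  // Returns the JSON body and empties the batch, so each beacon
  // carries only new measurements
  return JSON.stringify({ sentAt: Date.now(), metrics: batch.splice(0) });
}

// Browser wiring (commented out so the sketch stays self-contained):
// import { onLCP, onINP, onCLS } from 'web-vitals';
// onLCP(addMetric); onINP(addMetric); onCLS(addMetric);
// addEventListener('pagehide', () =>
//   navigator.sendBeacon('/vitals', flushBeacon()));
```

Flushing on `pagehide` via `sendBeacon` matters because late metrics like CLS and INP often finalize only when the user leaves the page.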
The Specific Discrepancies to Watch
"Good" lab LCP, "Poor" field LCP
Almost always: third-party scripts blocking the main thread for users on slower devices, or hero images not optimized for mobile (lab profile downloads them fine; real mobile users with slower connections see longer LCP).
"Good" lab CLS, "Poor" field CLS
Lab tests don't capture late-loading elements (cookie banners, GDPR modals, "shipping to" geo-personalization). These shift layout for real users after lab tests would have ended.
Missing INP in field data
If CrUX shows no INP data but reports other metrics, it usually means your origin has too few measurable interactions to compute a percentile. This isn't a free pass — it means the data is too sparse, not that you're passing.
The Practical Audit
Run this monthly:
- Open Search Console → Core Web Vitals. Note the "Poor" URL count for mobile and desktop.
- Open PageSpeed Insights for your homepage and top 5 product pages. Compare lab vs. field. Note any large discrepancies.
- For URLs with field data worse than lab data, dig into the field data section to identify which metric is failing.
- Use the lab "Opportunities" to identify fix candidates, but only ship fixes you believe will affect field data (i.e., fixes that target real-user conditions, not lab artifacts).
- Re-measure 28 days after shipping.
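Step 2 of the audit — comparing lab vs. field per URL — can be scripted against the PageSpeed Insights v5 API, which returns both in one response. A sketch under the assumption that the response field names below match the public API (they're drawn from its documented shape, but verify against your own responses):

```javascript
// Build a PSI v5 API request URL for a page (mobile strategy).
function buildPsiUrl(pageUrl, apiKey) {
  const base = 'https://www.googleapis.com/pagespeedonline/v5/runPagespeed';
  return `${base}?url=${encodeURIComponent(pageUrl)}&strategy=mobile&key=${apiKey}`;
}

// Pull field LCP (CrUX p75, ms) and lab LCP (Lighthouse, ms)
// out of one PSI response, and compute the discrepancy.
function compareLcp(psiResponse) {
  const field = psiResponse.loadingExperience
    ?.metrics?.LARGEST_CONTENTFUL_PAINT_MS?.percentile ?? null;
  const lab = psiResponse.lighthouseResult
    ?.audits?.['largest-contentful-paint']?.numericValue ?? null;
  const gap = field != null && lab != null ? field - lab : null;
  return { fieldMs: field, labMs: lab, gapMs: gap };
}
```

A large positive `gapMs` (field much slower than lab) is exactly the discrepancy pattern this article is about: the lab profile isn't seeing what your real users see.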
The free StoreVitals Web Vitals Checker is a lab tool — useful for the diagnostic step. For field data, always go to Search Console or PageSpeed Insights' field data section. The two tools answer different questions, and the answer that matters for ranking is the field one.
Bottom Line
Lab data is a flashlight. Field data is the truth. Lab data tells you where to look; field data tells you whether you fixed anything. Don't confuse a green Lighthouse score with passing Core Web Vitals — they're related but very, very different.