
Your PageSpeed Insights Score Is Lying To You. Here Is What Google Actually Measures.

Graeme Conkie · 9 min read

A Limerick accountancy firm rang me last autumn in a quiet panic. Their marketing person had been on a webinar about Google rankings and run their site through PageSpeed Insights that morning. Mobile score: 34. Big red number on a circular gauge. They had a quote that afternoon from a "WordPress performance specialist" offering to fix it for a four-figure invoice. They wanted my opinion before they signed.

The number was real. It just was not the number they thought it was.

Six weeks and a four-figure invoice later, the mobile score sat at 89. The firm waited for their search rankings to lift. They waited for enquiries to pick up. Nothing visibly changed. They had paid, in good faith, to optimise a score that does not affect what Google does next.

I want to explain how that happens, because it happens a lot, and because the version of this story I get on the phone is almost always the same.

The Score They Spent Money Fixing Was the Wrong One

PageSpeed Insights gives you two halves of one report. People miss this constantly.

The top half, in a small grey box that most people scroll straight past, shows real-world data from Chrome users who actually visited the site over the previous 28 days. The bottom half, with the big colour-coded score on a circular gauge, shows a simulated lab test run on a single emulated device with a throttled network connection. The bottom number is what everyone screenshots and forwards. It is also the one Google does not use to rank you.

That distinction is the entire game. Lab tests tell you what might be slow in a controlled environment. Field data tells you what real customers actually experience. Google ranks on the second one.

When the firm's marketing person sent over their report, the field-data section at the top of the page was not even visible in the screenshot. It had been cropped out.

What Google Actually Uses to Rank Sites

Google calls this the Chrome User Experience Report. CrUX for short. Per the public documentation at developer.chrome.com, the dataset covers somewhere in the region of 15 million origins worldwide, and it draws from real Chrome users with telemetry enabled. A rolling 28-day window. Aggregated at the 75th percentile, which means three out of every four real user experiences need to be inside the "good" thresholds for Largest Contentful Paint, Interaction to Next Paint, and Cumulative Layout Shift.
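You can pull this field data yourself. A minimal sketch of querying the CrUX API directly, assuming you have your own Google Cloud API key (the `CRUX_API_KEY` placeholder below is hypothetical; the endpoint and request fields are from the public CrUX API reference):

```javascript
// Build a query against the Chrome UX Report (CrUX) API for one origin's
// field data. Only the request is constructed here; sending it requires
// a real API key.
const CRUX_ENDPOINT =
  'https://chromeuxreport.googleapis.com/v1/records:queryRecord';

function buildCruxQuery(origin, apiKey) {
  return {
    url: `${CRUX_ENDPOINT}?key=${apiKey}`,
    body: JSON.stringify({
      origin,               // e.g. 'https://example.ie'
      formFactor: 'PHONE',  // field data is segmented by device type
      metrics: [
        'largest_contentful_paint',
        'interaction_to_next_paint',
        'cumulative_layout_shift',
      ],
    }),
  };
}

const query = buildCruxQuery('https://example.ie', 'CRUX_API_KEY');
console.log(query.url);
// Send with: fetch(query.url, { method: 'POST',
//   headers: { 'Content-Type': 'application/json' }, body: query.body })
```

The response contains per-metric histograms and the 75th-percentile values Google actually evaluates, for that origin, over the rolling 28-day window.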

Translate that to the firm. If three out of every four enquiry-form visitors saw the page render its main content inside two and a half seconds, the firm was passing LCP. The lab number from a simulated phone on a throttled connection is irrelevant to that calculation. Google does not see it.
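The pass/fail logic itself is simple enough to write down. A sketch using the published "good" thresholds (LCP within 2,500 ms, INP within 200 ms, CLS within 0.1), applied to 75th-percentile field values of the kind CrUX reports; the example numbers for the firm are illustrative, not their real data:

```javascript
// Core Web Vitals pass/fail at the 75th percentile, using the published
// "good" thresholds from Google's guidance.
const GOOD_THRESHOLDS = { lcpMs: 2500, inpMs: 200, cls: 0.1 };

// p75 holds 75th-percentile field values: if a metric's p75 is inside the
// threshold, at least three out of four real visits were "good".
function passesCoreWebVitals(p75) {
  return {
    lcp: p75.lcpMs <= GOOD_THRESHOLDS.lcpMs,
    inp: p75.inpMs <= GOOD_THRESHOLDS.inpMs,
    cls: p75.cls <= GOOD_THRESHOLDS.cls,
  };
}

// Illustrative field data: main content rendered in 2.1 s for three out of
// four visitors, so LCP passes regardless of any lab score.
console.log(passesCoreWebVitals({ lcpMs: 2100, inpMs: 180, cls: 0.04 }));
// → { lcp: true, inp: true, cls: true }
```

Note what is absent from that function: no Lighthouse score, no simulated device, no throttled network. Only real 75th-percentile measurements go in.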

This is documented openly in Google's own guidance. Web.dev's article on lab versus field data differences is unambiguous about which one drives the ranking signal, and Search Console's Core Web Vitals report draws straight from CrUX rather than from any synthetic test.

Why the Lab Score Is Intentionally Pessimistic

Now to the other side of the gap. Why is the Lighthouse number so often dramatically worse than the field-data number?

Because that is what it is designed to do.

The public Lighthouse throttling documentation on GitHub explains the "Slow 4G" configuration in detail. In its applied form the throttled connection runs at roughly 1.4 megabits per second with a 562.5-millisecond round-trip latency; the default simulated mode models the equivalent network as 150 milliseconds of latency at about 1.6 megabits per second. Either way, the CPU is slowed to a quarter of the test machine's speed. The whole environment is calibrated, as Google itself describes it, to surface what the slowest 5 to 10 percent of users would experience.
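Those figures can be read straight out of Lighthouse's published defaults. A sketch of the "Slow 4G" throttling object as the throttling docs define it; the usage comment assumes the `lighthouse` npm package and an already-running Chrome debug port:

```javascript
// Lighthouse's default mobile throttling, per its throttling documentation:
// rttMs/throughputKbps drive simulated mode, requestLatencyMs and the
// download/upload caps drive applied (DevTools-style) mode.
const slow4GThrottling = {
  rttMs: 150,                                // simulated round-trip time
  throughputKbps: 1.6 * 1024,                // simulated bandwidth, ~1.6 Mbps
  requestLatencyMs: 150 * 3.75,              // applied RTT: 562.5 ms
  downloadThroughputKbps: 1.6 * 1024 * 0.9,  // applied: ~1.44 Mbps
  uploadThroughputKbps: 750 * 0.9,
  cpuSlowdownMultiplier: 4,                  // CPU runs at a quarter speed
};

// Usage sketch with the Lighthouse Node API:
// const result = await lighthouse('https://example.ie', { port },
//   { extends: 'lighthouse:default',
//     settings: { throttling: slow4GThrottling } });
```

Every lab score you have ever screenshotted was produced under numbers like these, not under your customers' actual connections.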

That is useful. Genuinely useful for diagnostics. If your site cannot survive that scenario, an older Android on patchy 4G in rural Donegal is going to suffer. But "this is what the worst-case looks like" is a different statement to "this is how Google sees you."

Most of the firm's actual customers were Limerick professionals on city-centre broadband or modern phones on 5G. Their real experience of the site was nothing like what Lighthouse was simulating. The lab number reflected a worst-case stress test. The field number reflected reality. The contractor optimised for the stress test.

Lab data and field data answer different questions. Google ranks on the second one.

The Bit Where I Get Slightly Annoyed

I will admit my pet hates here. Vendors who quote a score as if it were a verdict. Performance "specialists" who pull four-figure invoices for optimisations that fix a number nobody at Google looks at. Articles that tell business owners to chase a 90+ Lighthouse score without ever mentioning what is actually in their Search Console report.

I have done a version of it myself, to be fair, years ago when I should have known better. I sent a customer down a Lighthouse rabbit hole, optimising images and minifying scripts, while the real problem was a slow database query on their old host's shared infrastructure. Their time-to-first-byte was atrocious. The lab score went from 41 to 78. Their real customers still waited four seconds for the first byte. Took me longer than I care to admit to realise I had been staring at the wrong dashboard.

It is a cheap mistake to make. The lab number is loud, colourful, and on a circular gauge. The field number is in a small box at the top with grey text. Of course people look at the wrong one.

What Actually Moves Field Data

Field data shifts when something that real users experience shifts. So:

Server response time, the time to first byte (TTFB), is the start of the whole chain. If your shared host's database server is overworked at peak hours, every real user pays for it. No amount of image optimisation rescues a slow first byte.

Image weight matters because images, on a typical WordPress page, account for somewhere in the region of 55 to 60 percent of total page weight according to recent industry benchmarks, though the exact ratio varies sharply by theme and content type. Heavy hero images blow LCP straight past the threshold and you can see it land in CrUX two to three weeks later.

Render-blocking JavaScript is the other big one. Page builders that load twenty scripts before the page renders are slower in the field even on a fast device, because the device still has to parse and execute all of that code. There is no test environment in the world where loading twenty third-party scripts is fast.

And distance from the data centre matters more than people think. A customer in Donegal on 4G hitting a server in Virginia is paying for transatlantic latency on every single request.

Web60's managed WordPress hosting runs in an Irish data centre with Redis object caching, Nginx page caching, and PHP-FPM tuned for the workload. The reason that matters is not the technology list. It is that an enquiry submitted by a real customer in Ireland lands on a server inside the country with a TTFB measured in tens of milliseconds rather than hundreds, and that is the difference between passing and failing LCP at the 75th percentile. For the wider context, we have written up the full performance picture for Irish business WordPress sites and the layered stack that drives the field-data numbers.

When the Lab Score Still Matters

This is the bit where I have to be honest about the other side.

If you have a brand-new site with not enough traffic to populate CrUX, you have no field data yet. Lighthouse is genuinely the only signal you have at that point. Use it. Just understand what you are using.

It is also a fine diagnostic tool when you already know something is wrong and you are trying to figure out what. A specific regression after a plugin install, a sudden LCP jump after a theme change. Lighthouse will tell you what changed faster than waiting twenty-eight days for CrUX to catch up.

The reality check: even within Lighthouse, the simulated throttling is a model, and the throttling documentation itself notes that a single slow render-blocking request can throw the simulation off in either direction. So even when you do use the lab score, treat it as a hint rather than a verdict.

What it is not, is a ranking signal. Use it as a microscope. Not a scoreboard.

What a Business Owner Should Actually Check

Search Console. The Core Web Vitals report. That is the only number that drives the ranking signal. Open it once a fortnight, see which URLs are flagged as poor or needs improvement, and only spend money on the URLs that are actually failing.

If the report says all your URLs are passing, you are fine. You can have a Lighthouse score of 47 and a passing Core Web Vitals report at the same time. Google is looking at the second number, and there is published evidence that a substantial share of Irish WordPress business sites fail mobile Core Web Vitals in the field without their owners ever noticing, precisely because they are watching the wrong dashboard.

The Limerick firm got most of their money back, by the way. The contractor was a decent operator and accepted that the brief had been the wrong brief. The firm's marketing person now opens Search Console every Monday instead of running PageSpeed tests on a Friday afternoon. Their rankings did not move because their rankings were already fine.

Conclusion

The lesson is not "Lighthouse is useless." It is that the loud, colourful number is not the one that matters. Pay attention to the report Google itself looks at when it ranks you. Check Search Console once a fortnight. Read the top half of the PageSpeed Insights report instead of the bottom half. Then decide what, if anything, actually needs fixing.

The score is not the goal. The customer's actual experience is. They are not always the same thing.


Graeme Conkie · Founder & Managing Director, Web60

Graeme Conkie founded SmartHost in 2020 and has spent years building hosting infrastructure for Irish businesses. He created Web60 after seeing the same problem repeatedly — Irish SMEs paying too much for hosting that underdelivers. He writes about WordPress infrastructure, server security, developer workflows, managed hosting strategy, and the real cost of hosting decisions for Irish business owners.
